Oct  9 05:00:22 np0005478302 kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct  9 05:00:22 np0005478302 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct  9 05:00:22 np0005478302 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 05:00:22 np0005478302 kernel: BIOS-provided physical RAM map:
Oct  9 05:00:22 np0005478302 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct  9 05:00:22 np0005478302 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct  9 05:00:22 np0005478302 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct  9 05:00:22 np0005478302 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Oct  9 05:00:22 np0005478302 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Oct  9 05:00:22 np0005478302 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct  9 05:00:22 np0005478302 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct  9 05:00:22 np0005478302 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct  9 05:00:22 np0005478302 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct  9 05:00:22 np0005478302 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000027fffffff] usable
Oct  9 05:00:22 np0005478302 kernel: NX (Execute Disable) protection: active
Oct  9 05:00:22 np0005478302 kernel: APIC: Static calls initialized
Oct  9 05:00:22 np0005478302 kernel: SMBIOS 2.8 present.
Oct  9 05:00:22 np0005478302 kernel: DMI: Red Hat OpenStack Compute/RHEL, BIOS 1.16.1-1.el9 04/01/2014
Oct  9 05:00:22 np0005478302 kernel: Hypervisor detected: KVM
Oct  9 05:00:22 np0005478302 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct  9 05:00:22 np0005478302 kernel: kvm-clock: using sched offset of 3349922997 cycles
Oct  9 05:00:22 np0005478302 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct  9 05:00:22 np0005478302 kernel: tsc: Detected 2445.406 MHz processor
Oct  9 05:00:22 np0005478302 kernel: last_pfn = 0x280000 max_arch_pfn = 0x400000000
Oct  9 05:00:22 np0005478302 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct  9 05:00:22 np0005478302 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct  9 05:00:22 np0005478302 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Oct  9 05:00:22 np0005478302 kernel: found SMP MP-table at [mem 0x000f5b60-0x000f5b6f]
Oct  9 05:00:22 np0005478302 kernel: Using GB pages for direct mapping
Oct  9 05:00:22 np0005478302 kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct  9 05:00:22 np0005478302 kernel: ACPI: Early table checksum verification disabled
Oct  9 05:00:22 np0005478302 kernel: ACPI: RSDP 0x00000000000F5B20 000014 (v00 BOCHS )
Oct  9 05:00:22 np0005478302 kernel: ACPI: RSDT 0x000000007FFE35EB 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 05:00:22 np0005478302 kernel: ACPI: FACP 0x000000007FFE3403 0000F4 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 05:00:22 np0005478302 kernel: ACPI: DSDT 0x000000007FFDFCC0 003743 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 05:00:22 np0005478302 kernel: ACPI: FACS 0x000000007FFDFC80 000040
Oct  9 05:00:22 np0005478302 kernel: ACPI: APIC 0x000000007FFE34F7 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 05:00:22 np0005478302 kernel: ACPI: MCFG 0x000000007FFE3587 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 05:00:22 np0005478302 kernel: ACPI: WAET 0x000000007FFE35C3 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 05:00:22 np0005478302 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe3403-0x7ffe34f6]
Oct  9 05:00:22 np0005478302 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfcc0-0x7ffe3402]
Oct  9 05:00:22 np0005478302 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfc80-0x7ffdfcbf]
Oct  9 05:00:22 np0005478302 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe34f7-0x7ffe3586]
Oct  9 05:00:22 np0005478302 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe3587-0x7ffe35c2]
Oct  9 05:00:22 np0005478302 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe35c3-0x7ffe35ea]
Oct  9 05:00:22 np0005478302 kernel: No NUMA configuration found
Oct  9 05:00:22 np0005478302 kernel: Faking a node at [mem 0x0000000000000000-0x000000027fffffff]
Oct  9 05:00:22 np0005478302 kernel: NODE_DATA(0) allocated [mem 0x27ffd3000-0x27fffdfff]
Oct  9 05:00:22 np0005478302 kernel: crashkernel reserved: 0x000000006f000000 - 0x000000007f000000 (256 MB)
Oct  9 05:00:22 np0005478302 kernel: Zone ranges:
Oct  9 05:00:22 np0005478302 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct  9 05:00:22 np0005478302 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct  9 05:00:22 np0005478302 kernel:  Normal   [mem 0x0000000100000000-0x000000027fffffff]
Oct  9 05:00:22 np0005478302 kernel:  Device   empty
Oct  9 05:00:22 np0005478302 kernel: Movable zone start for each node
Oct  9 05:00:22 np0005478302 kernel: Early memory node ranges
Oct  9 05:00:22 np0005478302 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct  9 05:00:22 np0005478302 kernel:  node   0: [mem 0x0000000000100000-0x000000007ffdafff]
Oct  9 05:00:22 np0005478302 kernel:  node   0: [mem 0x0000000100000000-0x000000027fffffff]
Oct  9 05:00:22 np0005478302 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000027fffffff]
Oct  9 05:00:22 np0005478302 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct  9 05:00:22 np0005478302 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct  9 05:00:22 np0005478302 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct  9 05:00:22 np0005478302 kernel: ACPI: PM-Timer IO Port: 0x608
Oct  9 05:00:22 np0005478302 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct  9 05:00:22 np0005478302 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct  9 05:00:22 np0005478302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct  9 05:00:22 np0005478302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct  9 05:00:22 np0005478302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct  9 05:00:22 np0005478302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct  9 05:00:22 np0005478302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct  9 05:00:22 np0005478302 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct  9 05:00:22 np0005478302 kernel: TSC deadline timer available
Oct  9 05:00:22 np0005478302 kernel: CPU topo: Max. logical packages:   4
Oct  9 05:00:22 np0005478302 kernel: CPU topo: Max. logical dies:       4
Oct  9 05:00:22 np0005478302 kernel: CPU topo: Max. dies per package:   1
Oct  9 05:00:22 np0005478302 kernel: CPU topo: Max. threads per core:   1
Oct  9 05:00:22 np0005478302 kernel: CPU topo: Num. cores per package:     1
Oct  9 05:00:22 np0005478302 kernel: CPU topo: Num. threads per package:   1
Oct  9 05:00:22 np0005478302 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct  9 05:00:22 np0005478302 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct  9 05:00:22 np0005478302 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct  9 05:00:22 np0005478302 kernel: kvm-guest: setup PV sched yield
Oct  9 05:00:22 np0005478302 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct  9 05:00:22 np0005478302 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct  9 05:00:22 np0005478302 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct  9 05:00:22 np0005478302 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct  9 05:00:22 np0005478302 kernel: PM: hibernation: Registered nosave memory: [mem 0x7ffdb000-0x7fffffff]
Oct  9 05:00:22 np0005478302 kernel: PM: hibernation: Registered nosave memory: [mem 0x80000000-0xafffffff]
Oct  9 05:00:22 np0005478302 kernel: PM: hibernation: Registered nosave memory: [mem 0xb0000000-0xbfffffff]
Oct  9 05:00:22 np0005478302 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfed1bfff]
Oct  9 05:00:22 np0005478302 kernel: PM: hibernation: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff]
Oct  9 05:00:22 np0005478302 kernel: PM: hibernation: Registered nosave memory: [mem 0xfed20000-0xfeffbfff]
Oct  9 05:00:22 np0005478302 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct  9 05:00:22 np0005478302 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct  9 05:00:22 np0005478302 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct  9 05:00:22 np0005478302 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct  9 05:00:22 np0005478302 kernel: Booting paravirtualized kernel on KVM
Oct  9 05:00:22 np0005478302 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct  9 05:00:22 np0005478302 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct  9 05:00:22 np0005478302 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u524288
Oct  9 05:00:22 np0005478302 kernel: kvm-guest: PV spinlocks enabled
Oct  9 05:00:22 np0005478302 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct  9 05:00:22 np0005478302 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 05:00:22 np0005478302 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct  9 05:00:22 np0005478302 kernel: random: crng init done
Oct  9 05:00:22 np0005478302 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct  9 05:00:22 np0005478302 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct  9 05:00:22 np0005478302 kernel: Fallback order for Node 0: 0 
Oct  9 05:00:22 np0005478302 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct  9 05:00:22 np0005478302 kernel: Policy zone: Normal
Oct  9 05:00:22 np0005478302 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct  9 05:00:22 np0005478302 kernel: software IO TLB: area num 4.
Oct  9 05:00:22 np0005478302 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct  9 05:00:22 np0005478302 kernel: ftrace: allocating 49370 entries in 193 pages
Oct  9 05:00:22 np0005478302 kernel: ftrace: allocated 193 pages with 3 groups
Oct  9 05:00:22 np0005478302 kernel: Dynamic Preempt: voluntary
Oct  9 05:00:22 np0005478302 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct  9 05:00:22 np0005478302 kernel: rcu: 	RCU event tracing is enabled.
Oct  9 05:00:22 np0005478302 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
Oct  9 05:00:22 np0005478302 kernel: 	Trampoline variant of Tasks RCU enabled.
Oct  9 05:00:22 np0005478302 kernel: 	Rude variant of Tasks RCU enabled.
Oct  9 05:00:22 np0005478302 kernel: 	Tracing variant of Tasks RCU enabled.
Oct  9 05:00:22 np0005478302 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct  9 05:00:22 np0005478302 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct  9 05:00:22 np0005478302 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct  9 05:00:22 np0005478302 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct  9 05:00:22 np0005478302 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct  9 05:00:22 np0005478302 kernel: NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
Oct  9 05:00:22 np0005478302 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct  9 05:00:22 np0005478302 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct  9 05:00:22 np0005478302 kernel: Console: colour VGA+ 80x25
Oct  9 05:00:22 np0005478302 kernel: printk: console [ttyS0] enabled
Oct  9 05:00:22 np0005478302 kernel: ACPI: Core revision 20230331
Oct  9 05:00:22 np0005478302 kernel: APIC: Switch to symmetric I/O mode setup
Oct  9 05:00:22 np0005478302 kernel: x2apic enabled
Oct  9 05:00:22 np0005478302 kernel: APIC: Switched APIC routing to: physical x2apic
Oct  9 05:00:22 np0005478302 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct  9 05:00:22 np0005478302 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct  9 05:00:22 np0005478302 kernel: kvm-guest: setup PV IPIs
Oct  9 05:00:22 np0005478302 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct  9 05:00:22 np0005478302 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Oct  9 05:00:22 np0005478302 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct  9 05:00:22 np0005478302 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct  9 05:00:22 np0005478302 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct  9 05:00:22 np0005478302 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct  9 05:00:22 np0005478302 kernel: Spectre V2 : Mitigation: Retpolines
Oct  9 05:00:22 np0005478302 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct  9 05:00:22 np0005478302 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Oct  9 05:00:22 np0005478302 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct  9 05:00:22 np0005478302 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct  9 05:00:22 np0005478302 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct  9 05:00:22 np0005478302 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct  9 05:00:22 np0005478302 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct  9 05:00:22 np0005478302 kernel: Transient Scheduler Attacks: Vulnerable: No microcode
Oct  9 05:00:22 np0005478302 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct  9 05:00:22 np0005478302 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct  9 05:00:22 np0005478302 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct  9 05:00:22 np0005478302 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Oct  9 05:00:22 np0005478302 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct  9 05:00:22 np0005478302 kernel: x86/fpu: xstate_offset[9]:  832, xstate_sizes[9]:    8
Oct  9 05:00:22 np0005478302 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Oct  9 05:00:22 np0005478302 kernel: Freeing SMP alternatives memory: 40K
Oct  9 05:00:22 np0005478302 kernel: pid_max: default: 32768 minimum: 301
Oct  9 05:00:22 np0005478302 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct  9 05:00:22 np0005478302 kernel: landlock: Up and running.
Oct  9 05:00:22 np0005478302 kernel: Yama: becoming mindful.
Oct  9 05:00:22 np0005478302 kernel: SELinux:  Initializing.
Oct  9 05:00:22 np0005478302 kernel: LSM support for eBPF active
Oct  9 05:00:22 np0005478302 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  9 05:00:22 np0005478302 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  9 05:00:22 np0005478302 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Oct  9 05:00:22 np0005478302 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct  9 05:00:22 np0005478302 kernel: ... version:                0
Oct  9 05:00:22 np0005478302 kernel: ... bit width:              48
Oct  9 05:00:22 np0005478302 kernel: ... generic registers:      6
Oct  9 05:00:22 np0005478302 kernel: ... value mask:             0000ffffffffffff
Oct  9 05:00:22 np0005478302 kernel: ... max period:             00007fffffffffff
Oct  9 05:00:22 np0005478302 kernel: ... fixed-purpose events:   0
Oct  9 05:00:22 np0005478302 kernel: ... event mask:             000000000000003f
Oct  9 05:00:22 np0005478302 kernel: signal: max sigframe size: 3376
Oct  9 05:00:22 np0005478302 kernel: rcu: Hierarchical SRCU implementation.
Oct  9 05:00:22 np0005478302 kernel: rcu: 	Max phase no-delay instances is 400.
Oct  9 05:00:22 np0005478302 kernel: smp: Bringing up secondary CPUs ...
Oct  9 05:00:22 np0005478302 kernel: smpboot: x86: Booting SMP configuration:
Oct  9 05:00:22 np0005478302 kernel: .... node  #0, CPUs:      #1 #2 #3
Oct  9 05:00:22 np0005478302 kernel: smp: Brought up 1 node, 4 CPUs
Oct  9 05:00:22 np0005478302 kernel: smpboot: Total of 4 processors activated (19563.24 BogoMIPS)
Oct  9 05:00:22 np0005478302 kernel: node 0 deferred pages initialised in 16ms
Oct  9 05:00:22 np0005478302 kernel: Memory: 7767884K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 615464K reserved, 0K cma-reserved)
Oct  9 05:00:22 np0005478302 kernel: devtmpfs: initialized
Oct  9 05:00:22 np0005478302 kernel: x86/mm: Memory block size: 128MB
Oct  9 05:00:22 np0005478302 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct  9 05:00:22 np0005478302 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct  9 05:00:22 np0005478302 kernel: pinctrl core: initialized pinctrl subsystem
Oct  9 05:00:22 np0005478302 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct  9 05:00:22 np0005478302 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct  9 05:00:22 np0005478302 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct  9 05:00:22 np0005478302 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct  9 05:00:22 np0005478302 kernel: audit: initializing netlink subsys (disabled)
Oct  9 05:00:22 np0005478302 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct  9 05:00:22 np0005478302 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct  9 05:00:22 np0005478302 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct  9 05:00:22 np0005478302 kernel: audit: type=2000 audit(1760000422.263:1): state=initialized audit_enabled=0 res=1
Oct  9 05:00:22 np0005478302 kernel: cpuidle: using governor menu
Oct  9 05:00:22 np0005478302 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct  9 05:00:22 np0005478302 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Oct  9 05:00:22 np0005478302 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct  9 05:00:22 np0005478302 kernel: PCI: Using configuration type 1 for base access
Oct  9 05:00:22 np0005478302 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct  9 05:00:22 np0005478302 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct  9 05:00:22 np0005478302 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct  9 05:00:22 np0005478302 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct  9 05:00:22 np0005478302 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct  9 05:00:22 np0005478302 kernel: Demotion targets for Node 0: null
Oct  9 05:00:22 np0005478302 kernel: cryptd: max_cpu_qlen set to 1000
Oct  9 05:00:22 np0005478302 kernel: ACPI: Added _OSI(Module Device)
Oct  9 05:00:22 np0005478302 kernel: ACPI: Added _OSI(Processor Device)
Oct  9 05:00:22 np0005478302 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct  9 05:00:22 np0005478302 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct  9 05:00:22 np0005478302 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct  9 05:00:22 np0005478302 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct  9 05:00:22 np0005478302 kernel: ACPI: Interpreter enabled
Oct  9 05:00:22 np0005478302 kernel: ACPI: PM: (supports S0 S5)
Oct  9 05:00:22 np0005478302 kernel: ACPI: Using IOAPIC for interrupt routing
Oct  9 05:00:22 np0005478302 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct  9 05:00:22 np0005478302 kernel: PCI: Using E820 reservations for host bridge windows
Oct  9 05:00:22 np0005478302 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct  9 05:00:22 np0005478302 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct  9 05:00:22 np0005478302 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct  9 05:00:22 np0005478302 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR DPC]
Oct  9 05:00:22 np0005478302 kernel: acpi PNP0A08:00: _OSC: OS now controls [SHPCHotplug PME AER PCIeCapability]
Oct  9 05:00:22 np0005478302 kernel: PCI host bridge to bus 0000:00
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:00: root bus resource [mem 0x280000000-0xa7fffffff window]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:01.0: BAR 0 [mem 0xf9800000-0xf9ffffff pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfc200000-0xfc203fff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.1: BAR 0 [mem 0xfea1a000-0xfea1afff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.2: BAR 0 [mem 0xfea1b000-0xfea1bfff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.3: BAR 0 [mem 0xfea1c000-0xfea1cfff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.4: BAR 0 [mem 0xfea1d000-0xfea1dfff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.5: BAR 0 [mem 0xfea1e000-0xfea1efff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.6: BAR 0 [mem 0xfea1f000-0xfea1ffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.7: BAR 0 [mem 0xfea20000-0xfea20fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:04.0: BAR 0 [mem 0xfea21000-0xfea21fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:1f.2: BAR 4 [io  0xd040-0xd05f]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea22000-0xfea22fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:1f.3: BAR 4 [io  0x0700-0x073f]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Oct  9 05:00:22 np0005478302 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfc800000-0xfc8000ff 64bit]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:02: extended config space not accessible
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [0] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [1] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [2] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [3] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [4] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [5] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [6] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [7] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [8] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [9] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [10] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [11] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [12] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [13] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [14] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [15] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [16] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [17] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [18] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [19] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [20] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [21] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [22] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [23] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [24] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [25] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [26] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [27] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [28] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [29] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [30] registered
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [31] registered
Oct  9 05:00:22 np0005478302 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct  9 05:00:22 np0005478302 kernel: pci 0000:02:01.0: BAR 4 [io  0xc000-0xc01f]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [0-2] registered
Oct  9 05:00:22 np0005478302 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Oct  9 05:00:22 np0005478302 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe840000-0xfe840fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:03:00.0: BAR 4 [mem 0xfbe00000-0xfbe03fff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:03:00.0: ROM [mem 0xfe800000-0xfe83ffff pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [0-3] registered
Oct  9 05:00:22 np0005478302 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Oct  9 05:00:22 np0005478302 kernel: pci 0000:04:00.0: BAR 1 [mem 0xfe600000-0xfe600fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfbc00000-0xfbc03fff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [0-4] registered
Oct  9 05:00:22 np0005478302 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Oct  9 05:00:22 np0005478302 kernel: pci 0000:05:00.0: BAR 4 [mem 0xfba00000-0xfba03fff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [0-5] registered
Oct  9 05:00:22 np0005478302 kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Oct  9 05:00:22 np0005478302 kernel: pci 0000:06:00.0: BAR 4 [mem 0xfb800000-0xfb803fff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [0-6] registered
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [0-7] registered
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [0-8] registered
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [0-9] registered
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [0-10] registered
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [0-11] registered
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [0-12] registered
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [0-13] registered
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [0-14] registered
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [0-15] registered
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [0-16] registered
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct  9 05:00:22 np0005478302 kernel: acpiphp: Slot [0-17] registered
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct  9 05:00:22 np0005478302 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct  9 05:00:22 np0005478302 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct  9 05:00:22 np0005478302 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct  9 05:00:22 np0005478302 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct  9 05:00:22 np0005478302 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct  9 05:00:22 np0005478302 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct  9 05:00:22 np0005478302 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct  9 05:00:22 np0005478302 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct  9 05:00:22 np0005478302 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct  9 05:00:22 np0005478302 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct  9 05:00:22 np0005478302 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct  9 05:00:22 np0005478302 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct  9 05:00:22 np0005478302 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct  9 05:00:22 np0005478302 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct  9 05:00:22 np0005478302 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct  9 05:00:22 np0005478302 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct  9 05:00:22 np0005478302 kernel: iommu: Default domain type: Translated
Oct  9 05:00:22 np0005478302 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct  9 05:00:22 np0005478302 kernel: SCSI subsystem initialized
Oct  9 05:00:22 np0005478302 kernel: ACPI: bus type USB registered
Oct  9 05:00:22 np0005478302 kernel: usbcore: registered new interface driver usbfs
Oct  9 05:00:22 np0005478302 kernel: usbcore: registered new interface driver hub
Oct  9 05:00:22 np0005478302 kernel: usbcore: registered new device driver usb
Oct  9 05:00:22 np0005478302 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct  9 05:00:22 np0005478302 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct  9 05:00:22 np0005478302 kernel: PTP clock support registered
Oct  9 05:00:22 np0005478302 kernel: EDAC MC: Ver: 3.0.0
Oct  9 05:00:22 np0005478302 kernel: NetLabel: Initializing
Oct  9 05:00:22 np0005478302 kernel: NetLabel:  domain hash size = 128
Oct  9 05:00:22 np0005478302 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct  9 05:00:22 np0005478302 kernel: NetLabel:  unlabeled traffic allowed by default
Oct  9 05:00:22 np0005478302 kernel: PCI: Using ACPI for IRQ routing
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct  9 05:00:22 np0005478302 kernel: vgaarb: loaded
Oct  9 05:00:22 np0005478302 kernel: clocksource: Switched to clocksource kvm-clock
Oct  9 05:00:22 np0005478302 kernel: VFS: Disk quotas dquot_6.6.0
Oct  9 05:00:22 np0005478302 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct  9 05:00:22 np0005478302 kernel: pnp: PnP ACPI init
Oct  9 05:00:22 np0005478302 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct  9 05:00:22 np0005478302 kernel: pnp: PnP ACPI: found 5 devices
Oct  9 05:00:22 np0005478302 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct  9 05:00:22 np0005478302 kernel: NET: Registered PF_INET protocol family
Oct  9 05:00:22 np0005478302 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct  9 05:00:22 np0005478302 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct  9 05:00:22 np0005478302 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct  9 05:00:22 np0005478302 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct  9 05:00:22 np0005478302 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct  9 05:00:22 np0005478302 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct  9 05:00:22 np0005478302 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct  9 05:00:22 np0005478302 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  9 05:00:22 np0005478302 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  9 05:00:22 np0005478302 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct  9 05:00:22 np0005478302 kernel: NET: Registered PF_XDP protocol family
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x0fff] to [bus 03] add_size 1000
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.2: bridge window [io  0x1000-0x0fff] to [bus 04] add_size 1000
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.3: bridge window [io  0x1000-0x0fff] to [bus 05] add_size 1000
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.4: bridge window [io  0x1000-0x0fff] to [bus 06] add_size 1000
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.5: bridge window [io  0x1000-0x0fff] to [bus 07] add_size 1000
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.6: bridge window [io  0x1000-0x0fff] to [bus 08] add_size 1000
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.7: bridge window [io  0x1000-0x0fff] to [bus 09] add_size 1000
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.0: bridge window [io  0x1000-0x0fff] to [bus 0a] add_size 1000
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.1: bridge window [io  0x1000-0x0fff] to [bus 0b] add_size 1000
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.2: bridge window [io  0x1000-0x0fff] to [bus 0c] add_size 1000
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.3: bridge window [io  0x1000-0x0fff] to [bus 0d] add_size 1000
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.4: bridge window [io  0x1000-0x0fff] to [bus 0e] add_size 1000
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.5: bridge window [io  0x1000-0x0fff] to [bus 0f] add_size 1000
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.6: bridge window [io  0x1000-0x0fff] to [bus 10] add_size 1000
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.7: bridge window [io  0x1000-0x0fff] to [bus 11] add_size 1000
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x0fff] to [bus 12] add_size 1000
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x1fff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.2: bridge window [io  0x2000-0x2fff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.3: bridge window [io  0x3000-0x3fff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.4: bridge window [io  0x4000-0x4fff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.5: bridge window [io  0x5000-0x5fff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.6: bridge window [io  0x6000-0x6fff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.7: bridge window [io  0x7000-0x7fff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.0: bridge window [io  0x8000-0x8fff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.1: bridge window [io  0x9000-0x9fff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.2: bridge window [io  0xa000-0xafff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.3: bridge window [io  0xb000-0xbfff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.4: bridge window [io  0xe000-0xefff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.5: bridge window [io  0xf000-0xffff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: failed to assign
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: failed to assign
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: failed to assign
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x1fff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.7: bridge window [io  0x2000-0x2fff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.6: bridge window [io  0x3000-0x3fff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.5: bridge window [io  0x4000-0x4fff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.4: bridge window [io  0x5000-0x5fff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.3: bridge window [io  0x6000-0x6fff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.2: bridge window [io  0x7000-0x7fff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.1: bridge window [io  0x8000-0x8fff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.0: bridge window [io  0x9000-0x9fff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.7: bridge window [io  0xa000-0xafff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.6: bridge window [io  0xb000-0xbfff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.5: bridge window [io  0xe000-0xefff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.4: bridge window [io  0xf000-0xffff]: assigned
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: failed to assign
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: failed to assign
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: failed to assign
Oct  9 05:00:22 np0005478302 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.4:   bridge window [io  0xf000-0xffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.5:   bridge window [io  0xe000-0xefff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.6:   bridge window [io  0xb000-0xbfff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.7:   bridge window [io  0xa000-0xafff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.0:   bridge window [io  0x9000-0x9fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.1:   bridge window [io  0x8000-0x8fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.2:   bridge window [io  0x7000-0x7fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.3:   bridge window [io  0x6000-0x6fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.4:   bridge window [io  0x5000-0x5fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.5:   bridge window [io  0x4000-0x4fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.6:   bridge window [io  0x3000-0x3fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.7:   bridge window [io  0x2000-0x2fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:04.0:   bridge window [io  0x1000-0x1fff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Oct  9 05:00:22 np0005478302 kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:00: resource 9 [mem 0x280000000-0xa7fffffff window]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:01: resource 0 [io  0xc000-0xcfff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:01: resource 1 [mem 0xfc600000-0xfc9fffff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:01: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:02: resource 0 [io  0xc000-0xcfff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:02: resource 1 [mem 0xfc600000-0xfc7fffff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:02: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:03: resource 2 [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:04: resource 2 [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:05: resource 2 [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:06: resource 0 [io  0xf000-0xffff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:06: resource 2 [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:07: resource 0 [io  0xe000-0xefff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:07: resource 2 [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:08: resource 0 [io  0xb000-0xbfff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:08: resource 2 [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:09: resource 0 [io  0xa000-0xafff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:09: resource 2 [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:0a: resource 0 [io  0x9000-0x9fff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:0a: resource 1 [mem 0xfda00000-0xfdbfffff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:0a: resource 2 [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:0b: resource 0 [io  0x8000-0x8fff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd800000-0xfd9fffff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:0b: resource 2 [mem 0xfae00000-0xfaffffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:0c: resource 0 [io  0x7000-0x7fff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd600000-0xfd7fffff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:0c: resource 2 [mem 0xfac00000-0xfadfffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:0d: resource 0 [io  0x6000-0x6fff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:0d: resource 1 [mem 0xfd400000-0xfd5fffff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:0d: resource 2 [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:0e: resource 0 [io  0x5000-0x5fff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:0e: resource 1 [mem 0xfd200000-0xfd3fffff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:0e: resource 2 [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:0f: resource 0 [io  0x4000-0x4fff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:0f: resource 1 [mem 0xfd000000-0xfd1fffff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:0f: resource 2 [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:10: resource 0 [io  0x3000-0x3fff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:10: resource 1 [mem 0xfce00000-0xfcffffff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:10: resource 2 [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:11: resource 0 [io  0x2000-0x2fff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:11: resource 1 [mem 0xfcc00000-0xfcdfffff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:11: resource 2 [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:12: resource 0 [io  0x1000-0x1fff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:12: resource 1 [mem 0xfca00000-0xfcbfffff]
Oct  9 05:00:22 np0005478302 kernel: pci_bus 0000:12: resource 2 [mem 0xfa000000-0xfa1fffff 64bit pref]
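The per-bus resource dump above follows one fixed pattern: most leaf buses get a 4 KiB I/O window plus a 2 MiB MMIO window and a 2 MiB 64-bit prefetchable window (buses 03 through 05 have no I/O window). Purely as an illustration, not any kernel tool, here is a short Python sketch (regex and names are mine) that turns such lines into usable numbers:

    import re

    # Parse "pci_bus <bus>: resource <n> [io|mem 0x<start>-0x<end> <flags>]" lines.
    LINE = re.compile(
        r"pci_bus (?P<bus>[0-9a-f:]+): resource (?P<idx>\d+) "
        r"\[(?P<kind>io|mem)\s+0x(?P<start>[0-9a-f]+)-0x(?P<end>[0-9a-f]+)(?P<flags>[^\]]*)\]"
    )

    def parse(line):
        m = LINE.search(line)
        if m is None:
            return None
        start, end = int(m["start"], 16), int(m["end"], 16)
        return (m["bus"], int(m["idx"]), m["kind"], start, end,
                end - start + 1, m["flags"].strip())

    sample = "pci_bus 0000:12: resource 2 [mem 0xfa000000-0xfa1fffff 64bit pref]"
    print(parse(sample))
    # ('0000:12', 2, 'mem', 4194304000, 4196401151, 2097152, '64bit pref')
    # size 2097152 bytes = 2 MiB, matching the window layout seen above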
Oct  9 05:00:22 np0005478302 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct  9 05:00:22 np0005478302 kernel: PCI: CLS 0 bytes, default 64
Oct  9 05:00:22 np0005478302 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct  9 05:00:22 np0005478302 kernel: software IO TLB: mapped [mem 0x000000006b000000-0x000000006f000000] (64MB)
Oct  9 05:00:22 np0005478302 kernel: Trying to unpack rootfs image as initramfs...
Oct  9 05:00:22 np0005478302 kernel: ACPI: bus type thunderbolt registered
Oct  9 05:00:22 np0005478302 kernel: Initialise system trusted keyrings
Oct  9 05:00:22 np0005478302 kernel: Key type blacklist registered
Oct  9 05:00:22 np0005478302 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct  9 05:00:22 np0005478302 kernel: zbud: loaded
Oct  9 05:00:22 np0005478302 kernel: integrity: Platform Keyring initialized
Oct  9 05:00:22 np0005478302 kernel: integrity: Machine keyring initialized
Oct  9 05:00:22 np0005478302 kernel: Freeing initrd memory: 86104K
Oct  9 05:00:22 np0005478302 kernel: NET: Registered PF_ALG protocol family
Oct  9 05:00:22 np0005478302 kernel: xor: automatically using best checksumming function   avx
Oct  9 05:00:22 np0005478302 kernel: Key type asymmetric registered
Oct  9 05:00:22 np0005478302 kernel: Asymmetric key parser 'x509' registered
Oct  9 05:00:22 np0005478302 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct  9 05:00:22 np0005478302 kernel: io scheduler mq-deadline registered
Oct  9 05:00:22 np0005478302 kernel: io scheduler kyber registered
Oct  9 05:00:22 np0005478302 kernel: io scheduler bfq registered
Oct  9 05:00:22 np0005478302 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Oct  9 05:00:22 np0005478302 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39
Oct  9 05:00:22 np0005478302 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40
Oct  9 05:00:22 np0005478302 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40
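Each root port above reports one interrupt shared by its PME and AER services (IRQs 24 through 40, one per port). A throwaway consistency check along those lines, assuming this log were saved to a file named messages (the filename and all code are mine):

    import re

    # Group "pcieport <dev>: PME/AER ... IRQ <n>" lines and confirm that PME and
    # AER on the same root port report the same interrupt number.
    pat = re.compile(r"pcieport (\S+): (PME|AER):.*IRQ (\d+)")
    irqs = {}
    for line in open("messages"):           # hypothetical path to this log
        m = pat.search(line)
        if m:
            irqs.setdefault(m.group(1), {})[m.group(2)] = int(m.group(3))
    for dev, d in sorted(irqs.items()):
        assert d.get("PME") == d.get("AER"), dev
        print(dev, "->", d["PME"])          # e.g. 0000:00:02.0 -> 24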
Oct  9 05:00:22 np0005478302 kernel: shpchp 0000:01:00.0: HPC vendor_id 1b36 device_id e ss_vid 0 ss_did 0
Oct  9 05:00:22 np0005478302 kernel: shpchp 0000:01:00.0: pci_hp_register failed with error -16
Oct  9 05:00:22 np0005478302 kernel: shpchp 0000:01:00.0: Slot initialization failed
Oct  9 05:00:22 np0005478302 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct  9 05:00:22 np0005478302 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct  9 05:00:22 np0005478302 kernel: ACPI: button: Power Button [PWRF]
Oct  9 05:00:22 np0005478302 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Oct  9 05:00:22 np0005478302 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct  9 05:00:22 np0005478302 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct  9 05:00:22 np0005478302 kernel: Non-volatile memory driver v1.3
Oct  9 05:00:22 np0005478302 kernel: rdac: device handler registered
Oct  9 05:00:22 np0005478302 kernel: hp_sw: device handler registered
Oct  9 05:00:22 np0005478302 kernel: emc: device handler registered
Oct  9 05:00:22 np0005478302 kernel: alua: device handler registered
Oct  9 05:00:22 np0005478302 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller
Oct  9 05:00:22 np0005478302 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1
Oct  9 05:00:22 np0005478302 kernel: uhci_hcd 0000:02:01.0: detected 2 ports
Oct  9 05:00:22 np0005478302 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x0000c000
Oct  9 05:00:22 np0005478302 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct  9 05:00:22 np0005478302 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct  9 05:00:22 np0005478302 kernel: usb usb1: Product: UHCI Host Controller
Oct  9 05:00:22 np0005478302 kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct  9 05:00:22 np0005478302 kernel: usb usb1: SerialNumber: 0000:02:01.0
Oct  9 05:00:22 np0005478302 kernel: hub 1-0:1.0: USB hub found
Oct  9 05:00:22 np0005478302 kernel: hub 1-0:1.0: 2 ports detected
Oct  9 05:00:22 np0005478302 kernel: usbcore: registered new interface driver usbserial_generic
Oct  9 05:00:22 np0005478302 kernel: usbserial: USB Serial support registered for generic
Oct  9 05:00:22 np0005478302 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct  9 05:00:22 np0005478302 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct  9 05:00:22 np0005478302 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct  9 05:00:22 np0005478302 kernel: mousedev: PS/2 mouse device common for all mice
Oct  9 05:00:22 np0005478302 kernel: rtc_cmos 00:03: RTC can wake from S4
Oct  9 05:00:22 np0005478302 kernel: rtc_cmos 00:03: registered as rtc0
Oct  9 05:00:22 np0005478302 kernel: rtc_cmos 00:03: setting system clock to 2025-10-09T09:00:22 UTC (1760000422)
Oct  9 05:00:22 np0005478302 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
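Note the four-hour offset between the syslog prefix (05:00:22) and the rtc_cmos line (09:00:22 UTC, epoch 1760000422): the logger stamps in local time, which here works out to UTC-4. A quick standard-library check:

    from datetime import datetime, timezone, timedelta

    epoch = 1760000422  # value from the rtc_cmos line above
    print(datetime.fromtimestamp(epoch, timezone.utc).isoformat())
    # 2025-10-09T09:00:22+00:00, matching the rtc_cmos message
    print(datetime.fromtimestamp(epoch, timezone(timedelta(hours=-4))).time())
    # 05:00:22, matching the syslog prefix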
Oct  9 05:00:22 np0005478302 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct  9 05:00:22 np0005478302 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct  9 05:00:22 np0005478302 kernel: usbcore: registered new interface driver usbhid
Oct  9 05:00:22 np0005478302 kernel: usbhid: USB HID core driver
Oct  9 05:00:22 np0005478302 kernel: drop_monitor: Initializing network drop monitor service
Oct  9 05:00:22 np0005478302 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct  9 05:00:22 np0005478302 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct  9 05:00:22 np0005478302 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct  9 05:00:22 np0005478302 kernel: Initializing XFRM netlink socket
Oct  9 05:00:22 np0005478302 kernel: NET: Registered PF_INET6 protocol family
Oct  9 05:00:22 np0005478302 kernel: Segment Routing with IPv6
Oct  9 05:00:22 np0005478302 kernel: NET: Registered PF_PACKET protocol family
Oct  9 05:00:22 np0005478302 kernel: mpls_gso: MPLS GSO support
Oct  9 05:00:22 np0005478302 kernel: IPI shorthand broadcast: enabled
Oct  9 05:00:22 np0005478302 kernel: AVX2 version of gcm_enc/dec engaged.
Oct  9 05:00:22 np0005478302 kernel: AES CTR mode by8 optimization enabled
Oct  9 05:00:22 np0005478302 kernel: sched_clock: Marking stable (978002177, 141786344)->(1322199299, -202410778)
Oct  9 05:00:22 np0005478302 kernel: registered taskstats version 1
Oct  9 05:00:22 np0005478302 kernel: Loading compiled-in X.509 certificates
Oct  9 05:00:22 np0005478302 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  9 05:00:22 np0005478302 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct  9 05:00:22 np0005478302 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct  9 05:00:22 np0005478302 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct  9 05:00:22 np0005478302 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct  9 05:00:22 np0005478302 kernel: Demotion targets for Node 0: null
Oct  9 05:00:22 np0005478302 kernel: page_owner is disabled
Oct  9 05:00:22 np0005478302 kernel: Key type .fscrypt registered
Oct  9 05:00:22 np0005478302 kernel: Key type fscrypt-provisioning registered
Oct  9 05:00:22 np0005478302 kernel: Key type big_key registered
Oct  9 05:00:22 np0005478302 kernel: Key type encrypted registered
Oct  9 05:00:22 np0005478302 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct  9 05:00:22 np0005478302 kernel: Loading compiled-in module X.509 certificates
Oct  9 05:00:22 np0005478302 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  9 05:00:22 np0005478302 kernel: ima: Allocated hash algorithm: sha256
Oct  9 05:00:22 np0005478302 kernel: ima: No architecture policies found
Oct  9 05:00:22 np0005478302 kernel: evm: Initialising EVM extended attributes:
Oct  9 05:00:22 np0005478302 kernel: evm: security.selinux
Oct  9 05:00:22 np0005478302 kernel: evm: security.SMACK64 (disabled)
Oct  9 05:00:22 np0005478302 kernel: evm: security.SMACK64EXEC (disabled)
Oct  9 05:00:22 np0005478302 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct  9 05:00:22 np0005478302 kernel: evm: security.SMACK64MMAP (disabled)
Oct  9 05:00:22 np0005478302 kernel: evm: security.apparmor (disabled)
Oct  9 05:00:22 np0005478302 kernel: evm: security.ima
Oct  9 05:00:22 np0005478302 kernel: evm: security.capability
Oct  9 05:00:22 np0005478302 kernel: evm: HMAC attrs: 0x1
Oct  9 05:00:22 np0005478302 kernel: Running certificate verification RSA selftest
Oct  9 05:00:22 np0005478302 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct  9 05:00:22 np0005478302 kernel: Running certificate verification ECDSA selftest
Oct  9 05:00:22 np0005478302 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct  9 05:00:22 np0005478302 kernel: clk: Disabling unused clocks
Oct  9 05:00:22 np0005478302 kernel: Freeing unused decrypted memory: 2028K
Oct  9 05:00:22 np0005478302 kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct  9 05:00:22 np0005478302 kernel: Write protecting the kernel read-only data: 30720k
Oct  9 05:00:22 np0005478302 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct  9 05:00:22 np0005478302 kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct  9 05:00:22 np0005478302 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct  9 05:00:22 np0005478302 kernel: Run /init as init process
Oct  9 05:00:22 np0005478302 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  9 05:00:22 np0005478302 systemd: Detected virtualization kvm.
Oct  9 05:00:22 np0005478302 systemd: Detected architecture x86-64.
Oct  9 05:00:22 np0005478302 systemd: Running in initrd.
Oct  9 05:00:22 np0005478302 systemd: No hostname configured, using default hostname.
Oct  9 05:00:22 np0005478302 systemd: Hostname set to <localhost>.
Oct  9 05:00:22 np0005478302 systemd: Initializing machine ID from VM UUID.
Oct  9 05:00:22 np0005478302 systemd: Queued start job for default target Initrd Default Target.
Oct  9 05:00:22 np0005478302 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  9 05:00:22 np0005478302 systemd: Reached target Local Encrypted Volumes.
Oct  9 05:00:22 np0005478302 systemd: Reached target Initrd /usr File System.
Oct  9 05:00:22 np0005478302 systemd: Reached target Local File Systems.
Oct  9 05:00:22 np0005478302 systemd: Reached target Path Units.
Oct  9 05:00:22 np0005478302 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct  9 05:00:22 np0005478302 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct  9 05:00:22 np0005478302 kernel: usb 1-1: Product: QEMU USB Tablet
Oct  9 05:00:22 np0005478302 kernel: usb 1-1: Manufacturer: QEMU
Oct  9 05:00:22 np0005478302 kernel: usb 1-1: SerialNumber: 28754-0000:00:02.0:00.0:01.0-1
Oct  9 05:00:22 np0005478302 systemd: Reached target Slice Units.
Oct  9 05:00:22 np0005478302 systemd: Reached target Swaps.
Oct  9 05:00:22 np0005478302 systemd: Reached target Timer Units.
Oct  9 05:00:22 np0005478302 systemd: Listening on D-Bus System Message Bus Socket.
Oct  9 05:00:22 np0005478302 systemd: Listening on Journal Socket (/dev/log).
Oct  9 05:00:22 np0005478302 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct  9 05:00:22 np0005478302 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0
Oct  9 05:00:22 np0005478302 systemd: Listening on Journal Socket.
Oct  9 05:00:22 np0005478302 systemd: Listening on udev Control Socket.
Oct  9 05:00:22 np0005478302 systemd: Listening on udev Kernel Socket.
Oct  9 05:00:22 np0005478302 systemd: Reached target Socket Units.
Oct  9 05:00:22 np0005478302 systemd: Starting Create List of Static Device Nodes...
Oct  9 05:00:22 np0005478302 systemd: Starting Journal Service...
Oct  9 05:00:22 np0005478302 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  9 05:00:22 np0005478302 systemd: Starting Apply Kernel Variables...
Oct  9 05:00:22 np0005478302 systemd: Starting Create System Users...
Oct  9 05:00:22 np0005478302 systemd: Starting Setup Virtual Console...
Oct  9 05:00:22 np0005478302 systemd: Finished Create List of Static Device Nodes.
Oct  9 05:00:22 np0005478302 systemd: Finished Apply Kernel Variables.
Oct  9 05:00:22 np0005478302 systemd: Finished Create System Users.
Oct  9 05:00:22 np0005478302 systemd-journald[283]: Journal started
Oct  9 05:00:22 np0005478302 systemd-journald[283]: Runtime Journal (/run/log/journal/c2ce88da801c421fa8d632aab8dfbba9) is 8.0M, max 153.6M, 145.6M free.
Oct  9 05:00:22 np0005478302 systemd-sysusers[287]: Creating group 'users' with GID 100.
Oct  9 05:00:22 np0005478302 systemd-sysusers[287]: Creating group 'dbus' with GID 81.
Oct  9 05:00:22 np0005478302 systemd-sysusers[287]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct  9 05:00:22 np0005478302 systemd: Started Journal Service.
Oct  9 05:00:23 np0005478302 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  9 05:00:23 np0005478302 systemd[1]: Starting Create Volatile Files and Directories...
Oct  9 05:00:23 np0005478302 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  9 05:00:23 np0005478302 systemd[1]: Finished Create Volatile Files and Directories.
Oct  9 05:00:23 np0005478302 systemd[1]: Finished Setup Virtual Console.
Oct  9 05:00:23 np0005478302 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct  9 05:00:23 np0005478302 systemd[1]: Starting dracut cmdline hook...
Oct  9 05:00:23 np0005478302 dracut-cmdline[300]: dracut-9 dracut-057-102.git20250818.el9
Oct  9 05:00:23 np0005478302 dracut-cmdline[300]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 05:00:23 np0005478302 systemd[1]: Finished dracut cmdline hook.
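The crashkernel= parameter echoed above uses range:size pairs; the kernel reserves the size attached to the range that total system RAM falls in. A simplified sketch of that selection rule (helper names are mine, this is not kernel code):

    # crashkernel=<lo>-<hi>:<size>[,...]: reserve <size> when lo <= RAM < hi;
    # an open-ended range like "64G-" has no upper bound.
    UNITS = {"M": 1 << 20, "G": 1 << 30}

    def as_bytes(s):                      # "192M" -> 201326592
        return int(s[:-1]) * UNITS[s[-1]]

    def crashkernel(spec, ram):
        for part in spec.split(","):
            rng, size = part.split(":")
            lo, _, hi = rng.partition("-")
            if as_bytes(lo) <= ram and (hi == "" or ram < as_bytes(hi)):
                return as_bytes(size)
        return 0

    spec = "1G-2G:192M,2G-64G:256M,64G-:512M"
    print(crashkernel(spec, 10 * (1 << 30)) >> 20, "MiB")  # 256 MiB for a 10 GiB guest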
Oct  9 05:00:23 np0005478302 systemd[1]: Starting dracut pre-udev hook...
Oct  9 05:00:23 np0005478302 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct  9 05:00:23 np0005478302 kernel: device-mapper: uevent: version 1.0.3
Oct  9 05:00:23 np0005478302 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct  9 05:00:23 np0005478302 kernel: RPC: Registered named UNIX socket transport module.
Oct  9 05:00:23 np0005478302 kernel: RPC: Registered udp transport module.
Oct  9 05:00:23 np0005478302 kernel: RPC: Registered tcp transport module.
Oct  9 05:00:23 np0005478302 kernel: RPC: Registered tcp-with-tls transport module.
Oct  9 05:00:23 np0005478302 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct  9 05:00:23 np0005478302 rpc.statd[415]: Version 2.5.4 starting
Oct  9 05:00:23 np0005478302 rpc.statd[415]: Initializing NSM state
Oct  9 05:00:23 np0005478302 rpc.idmapd[420]: Setting log level to 0
Oct  9 05:00:23 np0005478302 systemd[1]: Finished dracut pre-udev hook.
Oct  9 05:00:23 np0005478302 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  9 05:00:23 np0005478302 systemd-udevd[433]: Using default interface naming scheme 'rhel-9.0'.
Oct  9 05:00:23 np0005478302 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  9 05:00:23 np0005478302 systemd[1]: Starting dracut pre-trigger hook...
Oct  9 05:00:23 np0005478302 systemd[1]: Finished dracut pre-trigger hook.
Oct  9 05:00:23 np0005478302 systemd[1]: Starting Coldplug All udev Devices...
Oct  9 05:00:23 np0005478302 systemd[1]: Created slice Slice /system/modprobe.
Oct  9 05:00:23 np0005478302 systemd[1]: Starting Load Kernel Module configfs...
Oct  9 05:00:23 np0005478302 systemd[1]: Finished Coldplug All udev Devices.
Oct  9 05:00:23 np0005478302 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  9 05:00:23 np0005478302 systemd[1]: Reached target Network.
Oct  9 05:00:23 np0005478302 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  9 05:00:23 np0005478302 systemd[1]: Starting dracut initqueue hook...
Oct  9 05:00:23 np0005478302 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  9 05:00:23 np0005478302 systemd[1]: Finished Load Kernel Module configfs.
Oct  9 05:00:23 np0005478302 kernel: virtio_blk virtio2: 4/0/0 default/read/poll queues
Oct  9 05:00:23 np0005478302 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct  9 05:00:23 np0005478302 systemd-udevd[453]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 05:00:23 np0005478302 kernel: vda: vda1
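The virtio_blk capacity line checks out: 167772160 sectors of 512 bytes is exactly 80 GiB, which the kernel also reports as 85.9 decimal GB.

    sectors, bs = 167772160, 512           # values from the virtio2 line above
    total = sectors * bs
    print(total, total / 10**9, total / 2**30)
    # 85899345920 85.89934592 80.0, i.e. the (85.9 GB/80.0 GiB) in the log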
Oct  9 05:00:23 np0005478302 systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  9 05:00:23 np0005478302 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct  9 05:00:23 np0005478302 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Oct  9 05:00:23 np0005478302 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Oct  9 05:00:23 np0005478302 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only 
Oct  9 05:00:23 np0005478302 kernel: scsi host0: ahci
Oct  9 05:00:23 np0005478302 kernel: scsi host1: ahci
Oct  9 05:00:23 np0005478302 kernel: scsi host2: ahci
Oct  9 05:00:23 np0005478302 kernel: scsi host3: ahci
Oct  9 05:00:23 np0005478302 kernel: scsi host4: ahci
Oct  9 05:00:23 np0005478302 kernel: scsi host5: ahci
Oct  9 05:00:23 np0005478302 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22100 irq 49 lpm-pol 0
Oct  9 05:00:23 np0005478302 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22180 irq 49 lpm-pol 0
Oct  9 05:00:23 np0005478302 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22200 irq 49 lpm-pol 0
Oct  9 05:00:23 np0005478302 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22280 irq 49 lpm-pol 0
Oct  9 05:00:23 np0005478302 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22300 irq 49 lpm-pol 0
Oct  9 05:00:23 np0005478302 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22380 irq 49 lpm-pol 0
Oct  9 05:00:23 np0005478302 systemd[1]: Reached target Initrd Root Device.
Oct  9 05:00:23 np0005478302 systemd[1]: Mounting Kernel Configuration File System...
Oct  9 05:00:23 np0005478302 systemd[1]: Mounted Kernel Configuration File System.
Oct  9 05:00:23 np0005478302 systemd[1]: Reached target System Initialization.
Oct  9 05:00:23 np0005478302 systemd[1]: Reached target Basic System.
Oct  9 05:00:23 np0005478302 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct  9 05:00:23 np0005478302 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct  9 05:00:23 np0005478302 kernel: ata1.00: applying bridge limits
Oct  9 05:00:23 np0005478302 kernel: ata1.00: configured for UDMA/100
Oct  9 05:00:23 np0005478302 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct  9 05:00:23 np0005478302 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct  9 05:00:23 np0005478302 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Oct  9 05:00:23 np0005478302 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct  9 05:00:23 np0005478302 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct  9 05:00:23 np0005478302 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct  9 05:00:23 np0005478302 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct  9 05:00:23 np0005478302 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct  9 05:00:23 np0005478302 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct  9 05:00:24 np0005478302 systemd[1]: Finished dracut initqueue hook.
Oct  9 05:00:24 np0005478302 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  9 05:00:24 np0005478302 systemd[1]: Reached target Remote Encrypted Volumes.
Oct  9 05:00:24 np0005478302 systemd[1]: Reached target Remote File Systems.
Oct  9 05:00:24 np0005478302 systemd[1]: Starting dracut pre-mount hook...
Oct  9 05:00:24 np0005478302 systemd[1]: Finished dracut pre-mount hook.
Oct  9 05:00:24 np0005478302 systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct  9 05:00:24 np0005478302 systemd-fsck[527]: /usr/sbin/fsck.xfs: XFS file system.
Oct  9 05:00:24 np0005478302 systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  9 05:00:24 np0005478302 systemd[1]: Mounting /sysroot...
Oct  9 05:00:24 np0005478302 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct  9 05:00:24 np0005478302 kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct  9 05:00:24 np0005478302 kernel: XFS (vda1): Ending clean mount
Oct  9 05:00:24 np0005478302 systemd[1]: Mounted /sysroot.
Oct  9 05:00:24 np0005478302 systemd[1]: Reached target Initrd Root File System.
Oct  9 05:00:24 np0005478302 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct  9 05:00:24 np0005478302 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct  9 05:00:24 np0005478302 systemd[1]: Reached target Initrd File Systems.
Oct  9 05:00:24 np0005478302 systemd[1]: Reached target Initrd Default Target.
Oct  9 05:00:24 np0005478302 systemd[1]: Starting dracut mount hook...
Oct  9 05:00:24 np0005478302 systemd[1]: Finished dracut mount hook.
Oct  9 05:00:24 np0005478302 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct  9 05:00:24 np0005478302 rpc.idmapd[420]: exiting on signal 15
Oct  9 05:00:24 np0005478302 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct  9 05:00:24 np0005478302 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped target Network.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped target Timer Units.
Oct  9 05:00:24 np0005478302 systemd[1]: dbus.socket: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct  9 05:00:24 np0005478302 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped target Initrd Default Target.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped target Basic System.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped target Initrd Root Device.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped target Initrd /usr File System.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped target Path Units.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped target Remote File Systems.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped target Slice Units.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped target Socket Units.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped target System Initialization.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped target Local File Systems.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped target Swaps.
Oct  9 05:00:24 np0005478302 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped dracut mount hook.
Oct  9 05:00:24 np0005478302 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped dracut pre-mount hook.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped target Local Encrypted Volumes.
Oct  9 05:00:24 np0005478302 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct  9 05:00:24 np0005478302 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped dracut initqueue hook.
Oct  9 05:00:24 np0005478302 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped Apply Kernel Variables.
Oct  9 05:00:24 np0005478302 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped Create Volatile Files and Directories.
Oct  9 05:00:24 np0005478302 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped Coldplug All udev Devices.
Oct  9 05:00:24 np0005478302 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped dracut pre-trigger hook.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct  9 05:00:24 np0005478302 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped Setup Virtual Console.
Oct  9 05:00:24 np0005478302 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct  9 05:00:24 np0005478302 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct  9 05:00:24 np0005478302 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Closed udev Control Socket.
Oct  9 05:00:24 np0005478302 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Closed udev Kernel Socket.
Oct  9 05:00:24 np0005478302 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped dracut pre-udev hook.
Oct  9 05:00:24 np0005478302 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped dracut cmdline hook.
Oct  9 05:00:24 np0005478302 systemd[1]: Starting Cleanup udev Database...
Oct  9 05:00:24 np0005478302 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct  9 05:00:24 np0005478302 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped Create List of Static Device Nodes.
Oct  9 05:00:24 np0005478302 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Stopped Create System Users.
Oct  9 05:00:24 np0005478302 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct  9 05:00:24 np0005478302 systemd[1]: Finished Cleanup udev Database.
Oct  9 05:00:24 np0005478302 systemd[1]: Reached target Switch Root.
Oct  9 05:00:24 np0005478302 systemd[1]: Starting Switch Root...
Oct  9 05:00:24 np0005478302 systemd[1]: Switching root.
Oct  9 05:00:24 np0005478302 systemd-journald[283]: Received SIGTERM from PID 1 (systemd).
Oct  9 05:00:24 np0005478302 systemd-journald[283]: Journal stopped
Oct  9 05:00:25 np0005478302 kernel: audit: type=1404 audit(1760000424.775:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct  9 05:00:25 np0005478302 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:00:25 np0005478302 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:00:25 np0005478302 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:00:25 np0005478302 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:00:25 np0005478302 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:00:25 np0005478302 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:00:25 np0005478302 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:00:25 np0005478302 kernel: audit: type=1403 audit(1760000424.891:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
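The audit(...) field in the two records above is <epoch>.<milliseconds>:<serial>. Parsing the type=1403 policy-load record, purely for illustration (the regex is mine):

    import re
    from datetime import datetime, timezone

    m = re.search(r"audit\((\d+)\.(\d+):(\d+)\)",
                  "audit(1760000424.891:3): auid=4294967295")
    ts = datetime.fromtimestamp(int(m[1]), timezone.utc)
    print(ts.isoformat(), "+", m[2], "ms, serial", m[3])
    # 2025-10-09T09:00:24+00:00 + 891 ms, serial 3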
Oct  9 05:00:25 np0005478302 systemd: Successfully loaded SELinux policy in 120.854ms.
Oct  9 05:00:25 np0005478302 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.278ms.
Oct  9 05:00:25 np0005478302 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  9 05:00:25 np0005478302 systemd: Detected virtualization kvm.
Oct  9 05:00:25 np0005478302 systemd: Detected architecture x86-64.
Oct  9 05:00:25 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:00:25 np0005478302 systemd: initrd-switch-root.service: Deactivated successfully.
Oct  9 05:00:25 np0005478302 systemd: Stopped Switch Root.
Oct  9 05:00:25 np0005478302 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct  9 05:00:25 np0005478302 systemd: Created slice Slice /system/getty.
Oct  9 05:00:25 np0005478302 systemd: Created slice Slice /system/serial-getty.
Oct  9 05:00:25 np0005478302 systemd: Created slice Slice /system/sshd-keygen.
Oct  9 05:00:25 np0005478302 systemd: Created slice User and Session Slice.
Oct  9 05:00:25 np0005478302 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  9 05:00:25 np0005478302 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct  9 05:00:25 np0005478302 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct  9 05:00:25 np0005478302 systemd: Reached target Local Encrypted Volumes.
Oct  9 05:00:25 np0005478302 systemd: Stopped target Switch Root.
Oct  9 05:00:25 np0005478302 systemd: Stopped target Initrd File Systems.
Oct  9 05:00:25 np0005478302 systemd: Stopped target Initrd Root File System.
Oct  9 05:00:25 np0005478302 systemd: Reached target Local Integrity Protected Volumes.
Oct  9 05:00:25 np0005478302 systemd: Reached target Path Units.
Oct  9 05:00:25 np0005478302 systemd: Reached target rpc_pipefs.target.
Oct  9 05:00:25 np0005478302 systemd: Reached target Slice Units.
Oct  9 05:00:25 np0005478302 systemd: Reached target Swaps.
Oct  9 05:00:25 np0005478302 systemd: Reached target Local Verity Protected Volumes.
Oct  9 05:00:25 np0005478302 systemd: Listening on RPCbind Server Activation Socket.
Oct  9 05:00:25 np0005478302 systemd: Reached target RPC Port Mapper.
Oct  9 05:00:25 np0005478302 systemd: Listening on Process Core Dump Socket.
Oct  9 05:00:25 np0005478302 systemd: Listening on initctl Compatibility Named Pipe.
Oct  9 05:00:25 np0005478302 systemd: Listening on udev Control Socket.
Oct  9 05:00:25 np0005478302 systemd: Listening on udev Kernel Socket.
Oct  9 05:00:25 np0005478302 systemd: Mounting Huge Pages File System...
Oct  9 05:00:25 np0005478302 systemd: Mounting POSIX Message Queue File System...
Oct  9 05:00:25 np0005478302 systemd: Mounting Kernel Debug File System...
Oct  9 05:00:25 np0005478302 systemd: Mounting Kernel Trace File System...
Oct  9 05:00:25 np0005478302 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  9 05:00:25 np0005478302 systemd: Starting Create List of Static Device Nodes...
Oct  9 05:00:25 np0005478302 systemd: Starting Load Kernel Module configfs...
Oct  9 05:00:25 np0005478302 systemd: Starting Load Kernel Module drm...
Oct  9 05:00:25 np0005478302 systemd: Starting Load Kernel Module efi_pstore...
Oct  9 05:00:25 np0005478302 systemd: Starting Load Kernel Module fuse...
Oct  9 05:00:25 np0005478302 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct  9 05:00:25 np0005478302 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct  9 05:00:25 np0005478302 systemd: Stopped File System Check on Root Device.
Oct  9 05:00:25 np0005478302 systemd: Stopped Journal Service.
Oct  9 05:00:25 np0005478302 systemd: Starting Journal Service...
Oct  9 05:00:25 np0005478302 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  9 05:00:25 np0005478302 kernel: fuse: init (API version 7.37)
Oct  9 05:00:25 np0005478302 systemd: Starting Generate network units from Kernel command line...
Oct  9 05:00:25 np0005478302 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 05:00:25 np0005478302 systemd: Starting Remount Root and Kernel File Systems...
Oct  9 05:00:25 np0005478302 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct  9 05:00:25 np0005478302 systemd: Starting Apply Kernel Variables...
Oct  9 05:00:25 np0005478302 systemd: Starting Coldplug All udev Devices...
Oct  9 05:00:25 np0005478302 systemd: Mounted Huge Pages File System.
Oct  9 05:00:25 np0005478302 systemd: Mounted POSIX Message Queue File System.
Oct  9 05:00:25 np0005478302 systemd: Mounted Kernel Debug File System.
Oct  9 05:00:25 np0005478302 systemd: Mounted Kernel Trace File System.
Oct  9 05:00:25 np0005478302 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct  9 05:00:25 np0005478302 systemd: Finished Create List of Static Device Nodes.
Oct  9 05:00:25 np0005478302 kernel: ACPI: bus type drm_connector registered
Oct  9 05:00:25 np0005478302 systemd: modprobe@configfs.service: Deactivated successfully.
Oct  9 05:00:25 np0005478302 systemd: Finished Load Kernel Module configfs.
Oct  9 05:00:25 np0005478302 systemd-journald[650]: Journal started
Oct  9 05:00:25 np0005478302 systemd-journald[650]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.6M, 145.6M free.
Oct  9 05:00:25 np0005478302 systemd[1]: Queued start job for default target Multi-User System.
Oct  9 05:00:25 np0005478302 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct  9 05:00:25 np0005478302 systemd: Started Journal Service.
Oct  9 05:00:25 np0005478302 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct  9 05:00:25 np0005478302 systemd[1]: Finished Load Kernel Module drm.
Oct  9 05:00:25 np0005478302 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct  9 05:00:25 np0005478302 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct  9 05:00:25 np0005478302 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct  9 05:00:25 np0005478302 systemd[1]: Finished Load Kernel Module fuse.
Oct  9 05:00:25 np0005478302 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct  9 05:00:25 np0005478302 systemd[1]: Finished Generate network units from Kernel command line.
Oct  9 05:00:25 np0005478302 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct  9 05:00:25 np0005478302 systemd[1]: Mounting FUSE Control File System...
Oct  9 05:00:25 np0005478302 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  9 05:00:25 np0005478302 systemd[1]: Starting Rebuild Hardware Database...
Oct  9 05:00:25 np0005478302 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct  9 05:00:25 np0005478302 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct  9 05:00:25 np0005478302 systemd-journald[650]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.6M, 145.6M free.
Oct  9 05:00:25 np0005478302 systemd-journald[650]: Received client request to flush runtime journal.
Oct  9 05:00:25 np0005478302 systemd[1]: Starting Load/Save OS Random Seed...
Oct  9 05:00:25 np0005478302 systemd[1]: Starting Create System Users...
Oct  9 05:00:25 np0005478302 systemd[1]: Finished Apply Kernel Variables.
Oct  9 05:00:25 np0005478302 systemd[1]: Mounted FUSE Control File System.
Oct  9 05:00:25 np0005478302 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct  9 05:00:25 np0005478302 systemd[1]: Finished Load/Save OS Random Seed.
Oct  9 05:00:25 np0005478302 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  9 05:00:25 np0005478302 systemd[1]: Finished Create System Users.
Oct  9 05:00:25 np0005478302 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  9 05:00:25 np0005478302 systemd[1]: Finished Coldplug All udev Devices.
Oct  9 05:00:25 np0005478302 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  9 05:00:25 np0005478302 systemd[1]: Reached target Preparation for Local File Systems.
Oct  9 05:00:25 np0005478302 systemd[1]: Reached target Local File Systems.
Oct  9 05:00:25 np0005478302 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct  9 05:00:25 np0005478302 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct  9 05:00:25 np0005478302 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct  9 05:00:25 np0005478302 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct  9 05:00:25 np0005478302 systemd[1]: Starting Automatic Boot Loader Update...
Oct  9 05:00:25 np0005478302 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct  9 05:00:25 np0005478302 systemd[1]: Starting Create Volatile Files and Directories...
Oct  9 05:00:25 np0005478302 bootctl[667]: Couldn't find EFI system partition, skipping.
Oct  9 05:00:25 np0005478302 systemd[1]: Finished Automatic Boot Loader Update.
Oct  9 05:00:25 np0005478302 systemd[1]: Finished Create Volatile Files and Directories.
Oct  9 05:00:25 np0005478302 systemd[1]: Starting Security Auditing Service...
Oct  9 05:00:25 np0005478302 systemd[1]: Starting RPC Bind...
Oct  9 05:00:25 np0005478302 systemd[1]: Starting Rebuild Journal Catalog...
Oct  9 05:00:25 np0005478302 auditd[673]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct  9 05:00:25 np0005478302 auditd[673]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct  9 05:00:25 np0005478302 systemd[1]: Finished Rebuild Journal Catalog.
Oct  9 05:00:25 np0005478302 systemd[1]: Started RPC Bind.
Oct  9 05:00:25 np0005478302 augenrules[678]: /sbin/augenrules: No change
Oct  9 05:00:25 np0005478302 augenrules[693]: No rules
Oct  9 05:00:25 np0005478302 augenrules[693]: enabled 1
Oct  9 05:00:25 np0005478302 augenrules[693]: failure 1
Oct  9 05:00:25 np0005478302 augenrules[693]: pid 673
Oct  9 05:00:25 np0005478302 augenrules[693]: rate_limit 0
Oct  9 05:00:25 np0005478302 augenrules[693]: backlog_limit 8192
Oct  9 05:00:25 np0005478302 augenrules[693]: lost 0
Oct  9 05:00:25 np0005478302 augenrules[693]: backlog 4
Oct  9 05:00:25 np0005478302 augenrules[693]: backlog_wait_time 60000
Oct  9 05:00:25 np0005478302 augenrules[693]: backlog_wait_time_actual 0
Oct  9 05:00:25 np0005478302 augenrules[693]: enabled 1
Oct  9 05:00:25 np0005478302 augenrules[693]: failure 1
Oct  9 05:00:25 np0005478302 augenrules[693]: pid 673
Oct  9 05:00:25 np0005478302 augenrules[693]: rate_limit 0
Oct  9 05:00:25 np0005478302 augenrules[693]: backlog_limit 8192
Oct  9 05:00:25 np0005478302 augenrules[693]: lost 0
Oct  9 05:00:25 np0005478302 augenrules[693]: backlog 4
Oct  9 05:00:25 np0005478302 augenrules[693]: backlog_wait_time 60000
Oct  9 05:00:25 np0005478302 augenrules[693]: backlog_wait_time_actual 0
Oct  9 05:00:25 np0005478302 augenrules[693]: enabled 1
Oct  9 05:00:25 np0005478302 augenrules[693]: failure 1
Oct  9 05:00:25 np0005478302 augenrules[693]: pid 673
Oct  9 05:00:25 np0005478302 augenrules[693]: rate_limit 0
Oct  9 05:00:25 np0005478302 augenrules[693]: backlog_limit 8192
Oct  9 05:00:25 np0005478302 augenrules[693]: lost 0
Oct  9 05:00:25 np0005478302 augenrules[693]: backlog 4
Oct  9 05:00:25 np0005478302 augenrules[693]: backlog_wait_time 60000
Oct  9 05:00:25 np0005478302 augenrules[693]: backlog_wait_time_actual 0
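The repeated enabled/failure/.../backlog_wait_time_actual block above is audit status output relayed by augenrules, one key and one integer value per line. Folding one block into a dict is a one-liner; a sketch only, with the block text pasted from the log:

    status_text = """\
    enabled 1
    failure 1
    pid 673
    rate_limit 0
    backlog_limit 8192
    lost 0
    backlog 4
    backlog_wait_time 60000
    backlog_wait_time_actual 0"""
    status = {k: int(v) for k, v in (ln.split() for ln in status_text.splitlines())}
    print(status["pid"], status["backlog_limit"], status["lost"])  # 673 8192 0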
Oct  9 05:00:25 np0005478302 systemd[1]: Started Security Auditing Service.
Oct  9 05:00:25 np0005478302 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct  9 05:00:25 np0005478302 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct  9 05:00:25 np0005478302 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct  9 05:00:25 np0005478302 systemd[1]: Finished Rebuild Hardware Database.
Oct  9 05:00:25 np0005478302 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  9 05:00:25 np0005478302 systemd[1]: Starting Update is Completed...
Oct  9 05:00:25 np0005478302 systemd[1]: Finished Update is Completed.
Oct  9 05:00:25 np0005478302 systemd-udevd[701]: Using default interface naming scheme 'rhel-9.0'.
Oct  9 05:00:25 np0005478302 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  9 05:00:25 np0005478302 systemd[1]: Reached target System Initialization.
Oct  9 05:00:25 np0005478302 systemd[1]: Started dnf makecache --timer.
Oct  9 05:00:25 np0005478302 systemd[1]: Started Daily rotation of log files.
Oct  9 05:00:25 np0005478302 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct  9 05:00:25 np0005478302 systemd[1]: Reached target Timer Units.
Oct  9 05:00:25 np0005478302 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct  9 05:00:25 np0005478302 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct  9 05:00:25 np0005478302 systemd[1]: Reached target Socket Units.
Oct  9 05:00:25 np0005478302 systemd[1]: Starting D-Bus System Message Bus...
Oct  9 05:00:25 np0005478302 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 05:00:25 np0005478302 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct  9 05:00:25 np0005478302 systemd[1]: Starting Load Kernel Module configfs...
Oct  9 05:00:25 np0005478302 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  9 05:00:25 np0005478302 systemd[1]: Finished Load Kernel Module configfs.
Oct  9 05:00:25 np0005478302 systemd-udevd[711]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 05:00:25 np0005478302 systemd[1]: Started D-Bus System Message Bus.
Oct  9 05:00:25 np0005478302 systemd[1]: Reached target Basic System.
Oct  9 05:00:25 np0005478302 dbus-broker-lau[722]: Ready
Oct  9 05:00:25 np0005478302 systemd[1]: Starting NTP client/server...
Oct  9 05:00:25 np0005478302 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct  9 05:00:25 np0005478302 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct  9 05:00:25 np0005478302 systemd[1]: Starting IPv4 firewall with iptables...
Oct  9 05:00:25 np0005478302 systemd[1]: Started irqbalance daemon.
Oct  9 05:00:25 np0005478302 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct  9 05:00:25 np0005478302 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 05:00:25 np0005478302 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 05:00:25 np0005478302 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 05:00:25 np0005478302 systemd[1]: Reached target sshd-keygen.target.
Oct  9 05:00:25 np0005478302 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct  9 05:00:25 np0005478302 systemd[1]: Reached target User and Group Name Lookups.
Oct  9 05:00:25 np0005478302 systemd[1]: Starting User Login Management...
Oct  9 05:00:25 np0005478302 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct  9 05:00:25 np0005478302 chronyd[752]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  9 05:00:25 np0005478302 chronyd[752]: Loaded 0 symmetric keys
Oct  9 05:00:25 np0005478302 chronyd[752]: Using right/UTC timezone to obtain leap second data
Oct  9 05:00:25 np0005478302 chronyd[752]: Loaded seccomp filter (level 2)
Oct  9 05:00:25 np0005478302 systemd[1]: Started NTP client/server.
Oct  9 05:00:25 np0005478302 systemd-logind[745]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  9 05:00:25 np0005478302 systemd-logind[745]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  9 05:00:25 np0005478302 systemd-logind[745]: New seat seat0.
Oct  9 05:00:25 np0005478302 systemd[1]: Started User Login Management.
Oct  9 05:00:25 np0005478302 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct  9 05:00:25 np0005478302 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct  9 05:00:25 np0005478302 kernel: lpc_ich 0000:00:1f.0: I/O space for GPIO uninitialized
Oct  9 05:00:25 np0005478302 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct  9 05:00:25 np0005478302 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct  9 05:00:25 np0005478302 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct  9 05:00:25 np0005478302 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct  9 05:00:26 np0005478302 iptables.init[739]: iptables: Applying firewall rules: [  OK  ]
Oct  9 05:00:26 np0005478302 systemd[1]: Finished IPv4 firewall with iptables.
Oct  9 05:00:26 np0005478302 kernel: iTCO_vendor_support: vendor-support=0
Oct  9 05:00:26 np0005478302 kernel: iTCO_wdt iTCO_wdt.1.auto: Found a ICH9 TCO device (Version=2, TCOBASE=0x0660)
Oct  9 05:00:26 np0005478302 kernel: iTCO_wdt iTCO_wdt.1.auto: initialized. heartbeat=30 sec (nowayout=0)
Oct  9 05:00:26 np0005478302 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Oct  9 05:00:26 np0005478302 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Oct  9 05:00:26 np0005478302 kernel: Console: switching to colour dummy device 80x25
Oct  9 05:00:26 np0005478302 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct  9 05:00:26 np0005478302 kernel: [drm] features: -context_init
Oct  9 05:00:26 np0005478302 kernel: [drm] number of scanouts: 1
Oct  9 05:00:26 np0005478302 kernel: [drm] number of cap sets: 0
Oct  9 05:00:26 np0005478302 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Oct  9 05:00:26 np0005478302 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct  9 05:00:26 np0005478302 kernel: Console: switching to colour frame buffer device 160x50
Oct  9 05:00:26 np0005478302 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct  9 05:00:26 np0005478302 kernel: kvm_amd: TSC scaling supported
Oct  9 05:00:26 np0005478302 kernel: kvm_amd: Nested Virtualization enabled
Oct  9 05:00:26 np0005478302 kernel: kvm_amd: Nested Paging enabled
Oct  9 05:00:26 np0005478302 kernel: kvm_amd: LBR virtualization supported
Oct  9 05:00:26 np0005478302 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct  9 05:00:26 np0005478302 kernel: kvm_amd: Virtual GIF supported
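[annotation] The kvm_amd lines advertise nested virtualization and nested paging inside this guest. A minimal sketch to confirm the same capabilities from sysfs, assuming the module parameters kvm_amd exports on AMD hosts:

    from pathlib import Path

    params = Path("/sys/module/kvm_amd/parameters")
    for name in ("nested", "npt"):   # nested virtualization, nested paging
        p = params / name
        if p.exists():
            print(name, "=", p.read_text().strip())   # "1" or "Y" means enabled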
Oct  9 05:00:26 np0005478302 cloud-init[793]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 09 Oct 2025 09:00:26 +0000. Up 4.82 seconds.
Oct  9 05:00:26 np0005478302 systemd[1]: run-cloud\x2dinit-tmp-tmpptky80uq.mount: Deactivated successfully.
Oct  9 05:00:26 np0005478302 systemd[1]: Starting Hostname Service...
Oct  9 05:00:26 np0005478302 systemd[1]: Started Hostname Service.
Oct  9 05:00:26 np0005478302 systemd-hostnamed[807]: Hostname set to <np0005478302> (static)
Oct  9 05:00:26 np0005478302 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
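[annotation] The local (pre-network) stage is the first of the four cloud-init stages this boot walks through: init-local here, then init, modules:config and modules:final further down. A minimal sketch for following that progression from userspace, assuming the cloud-init CLI of the 24.4 release logged above:

    import subprocess

    # `cloud-init status --long` reports the current stage/result and the datasource.
    out = subprocess.run(["cloud-init", "status", "--long"],
                         capture_output=True, text=True).stdout
    print(out)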
Oct  9 05:00:26 np0005478302 systemd[1]: Reached target Preparation for Network.
Oct  9 05:00:26 np0005478302 systemd[1]: Starting Network Manager...
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.7764] NetworkManager (version 1.54.1-1.el9) is starting... (boot:7c020c8f-ae8f-497c-a51c-02a263af6717)
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.7767] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.7850] manager[0x55ba49629080]: monitoring kernel firmware directory '/lib/firmware'.
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.7882] hostname: hostname: using hostnamed
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.7883] hostname: static hostname changed from (none) to "np0005478302"
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.7885] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.7970] manager[0x55ba49629080]: rfkill: Wi-Fi hardware radio set enabled
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.7970] manager[0x55ba49629080]: rfkill: WWAN hardware radio set enabled
Oct  9 05:00:26 np0005478302 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8043] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8043] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8044] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8045] manager: Networking is enabled by state file
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8047] settings: Loaded settings plugin: keyfile (internal)
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8071] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8098] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
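[annotation] NetworkManager flags the ifcfg-rh settings plugin as deprecated and names the remedy itself. A minimal sketch that runs exactly that migration, assuming nmcli is present and the caller has root/polkit rights:

    import subprocess

    # Converts legacy /etc/sysconfig/network-scripts/ifcfg-* profiles to keyfile format.
    result = subprocess.run(["nmcli", "connection", "migrate"],
                            capture_output=True, text=True)
    print(result.stdout or result.stderr)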
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8116] dhcp: init: Using DHCP client 'internal'
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8118] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8134] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8144] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8154] device (lo): Activation: starting connection 'lo' (42cdbca8-f689-4fdd-9617-072161b4803e)
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8164] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8169] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:00:26 np0005478302 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 05:00:26 np0005478302 systemd[1]: Started Network Manager.
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8215] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8219] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8222] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8224] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  9 05:00:26 np0005478302 systemd[1]: Reached target Network.
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8228] device (eth0): carrier: link connected
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8232] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8237] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8243] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8248] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8249] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8252] manager: NetworkManager state is now CONNECTING
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8253] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8259] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8264] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:00:26 np0005478302 systemd[1]: Starting Network Manager Wait Online...
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8270] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Oct  9 05:00:26 np0005478302 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8298] dhcp4 (eth0): state changed new lease, address=192.168.26.64
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8305] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  9 05:00:26 np0005478302 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8444] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  9 05:00:26 np0005478302 systemd[1]: Started GSSAPI Proxy Daemon.
Oct  9 05:00:26 np0005478302 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  9 05:00:26 np0005478302 systemd[1]: Reached target NFS client services.
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8462] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  9 05:00:26 np0005478302 NetworkManager[811]: <info>  [1760000426.8465] device (lo): Activation: successful, device activated.
Oct  9 05:00:26 np0005478302 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  9 05:00:26 np0005478302 systemd[1]: Reached target Remote File Systems.
Oct  9 05:00:26 np0005478302 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 05:00:27 np0005478302 NetworkManager[811]: <info>  [1760000427.9616] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:00:29 np0005478302 NetworkManager[811]: <info>  [1760000429.0500] dhcp6 (eth0): state changed new lease, address=2001:db8::186
Oct  9 05:00:30 np0005478302 NetworkManager[811]: <info>  [1760000430.7144] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:00:30 np0005478302 NetworkManager[811]: <info>  [1760000430.7169] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:00:30 np0005478302 NetworkManager[811]: <info>  [1760000430.7170] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:00:30 np0005478302 NetworkManager[811]: <info>  [1760000430.7173] manager: NetworkManager state is now CONNECTED_SITE
Oct  9 05:00:30 np0005478302 NetworkManager[811]: <info>  [1760000430.7175] device (eth0): Activation: successful, device activated.
Oct  9 05:00:30 np0005478302 NetworkManager[811]: <info>  [1760000430.7179] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  9 05:00:30 np0005478302 NetworkManager[811]: <info>  [1760000430.7182] manager: startup complete
Oct  9 05:00:30 np0005478302 systemd[1]: Finished Network Manager Wait Online.
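[annotation] NetworkManager-wait-online returns once the manager reaches CONNECTED_GLOBAL and reports startup complete, which is what gates the network-dependent units below. A minimal sketch for polling the same state from a script, assuming nmcli:

    import subprocess

    state = subprocess.run(["nmcli", "-g", "STATE", "general"],
                           capture_output=True, text=True, check=True).stdout.strip()
    print("NetworkManager state:", state)   # "connected" once CONNECTED_GLOBAL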
Oct  9 05:00:30 np0005478302 systemd[1]: Starting Cloud-init: Network Stage...
Oct  9 05:00:30 np0005478302 cloud-init[877]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 09 Oct 2025 09:00:30 +0000. Up 9.40 seconds.
Oct  9 05:00:30 np0005478302 cloud-init[877]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct  9 05:00:30 np0005478302 cloud-init[877]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  9 05:00:30 np0005478302 cloud-init[877]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct  9 05:00:30 np0005478302 cloud-init[877]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  9 05:00:30 np0005478302 cloud-init[877]: ci-info: |  eth0  | True |        192.168.26.64         | 255.255.255.0 | global | fa:16:3e:77:91:b1 |
Oct  9 05:00:30 np0005478302 cloud-init[877]: ci-info: |  eth0  | True |      2001:db8::186/128       |       .       | global | fa:16:3e:77:91:b1 |
Oct  9 05:00:30 np0005478302 cloud-init[877]: ci-info: |  eth0  | True | fe80::f816:3eff:fe77:91b1/64 |       .       |  link  | fa:16:3e:77:91:b1 |
Oct  9 05:00:30 np0005478302 cloud-init[877]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct  9 05:00:30 np0005478302 cloud-init[877]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct  9 05:00:30 np0005478302 cloud-init[877]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  9 05:00:30 np0005478302 cloud-init[877]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct  9 05:00:30 np0005478302 cloud-init[877]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct  9 05:00:30 np0005478302 cloud-init[877]: ci-info: | Route |   Destination   |   Gateway    |     Genmask     | Interface | Flags |
Oct  9 05:00:30 np0005478302 cloud-init[877]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct  9 05:00:30 np0005478302 cloud-init[877]: ci-info: |   0   |     0.0.0.0     | 192.168.26.1 |     0.0.0.0     |    eth0   |   UG  |
Oct  9 05:00:30 np0005478302 cloud-init[877]: ci-info: |   1   | 169.254.169.254 | 192.168.26.2 | 255.255.255.255 |    eth0   |  UGH  |
Oct  9 05:00:30 np0005478302 cloud-init[877]: ci-info: |   2   |   192.168.26.0  |   0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct  9 05:00:30 np0005478302 cloud-init[877]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct  9 05:00:30 np0005478302 cloud-init[877]: ci-info: ++++++++++++++++++++++Route IPv6 info++++++++++++++++++++++
Oct  9 05:00:30 np0005478302 cloud-init[877]: ci-info: +-------+---------------+-------------+-----------+-------+
Oct  9 05:00:30 np0005478302 cloud-init[877]: ci-info: | Route |  Destination  |   Gateway   | Interface | Flags |
Oct  9 05:00:30 np0005478302 cloud-init[877]: ci-info: +-------+---------------+-------------+-----------+-------+
Oct  9 05:00:31 np0005478302 cloud-init[877]: ci-info: |   1   |  2001:db8::1  |      ::     |    eth0   |   U   |
Oct  9 05:00:31 np0005478302 cloud-init[877]: ci-info: |   2   | 2001:db8::186 |      ::     |    eth0   |   U   |
Oct  9 05:00:31 np0005478302 cloud-init[877]: ci-info: |   3   |   fe80::/64   |      ::     |    eth0   |   U   |
Oct  9 05:00:31 np0005478302 cloud-init[877]: ci-info: |   4   |      ::/0     | 2001:db8::1 |    eth0   |   UG  |
Oct  9 05:00:31 np0005478302 cloud-init[877]: ci-info: |   6   |     local     |      ::     |    eth0   |   U   |
Oct  9 05:00:31 np0005478302 cloud-init[877]: ci-info: |   7   |     local     |      ::     |    eth0   |   U   |
Oct  9 05:00:31 np0005478302 cloud-init[877]: ci-info: |   8   |   multicast   |      ::     |    eth0   |   U   |
Oct  9 05:00:31 np0005478302 cloud-init[877]: ci-info: +-------+---------------+-------------+-----------+-------+
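[annotation] The ci-info tables above are cloud-init's rendering of the interface and route state. A minimal sketch that reproduces the IPv4 route table with iproute2's JSON output, the same `ip -j` interface the Zuul job uses for links later in this log:

    import json, subprocess

    routes = json.loads(subprocess.run(["ip", "-j", "route"], capture_output=True,
                                       text=True, check=True).stdout)
    for r in routes:
        print(r.get("dst", "default"), "via", r.get("gateway", "-"),
              "dev", r.get("dev"))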
Oct  9 05:00:31 np0005478302 chronyd[752]: Selected source 141.11.234.198 (2.centos.pool.ntp.org)
Oct  9 05:00:31 np0005478302 chronyd[752]: System clock TAI offset set to 37 seconds
Oct  9 05:00:31 np0005478302 cloud-init[877]: Generating public/private rsa key pair.
Oct  9 05:00:31 np0005478302 cloud-init[877]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct  9 05:00:31 np0005478302 cloud-init[877]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct  9 05:00:31 np0005478302 cloud-init[877]: The key fingerprint is:
Oct  9 05:00:31 np0005478302 cloud-init[877]: SHA256:SjSL87cDzENWyuZUyiChhRvvwxEze0SI7P1fok53LsE root@np0005478302
Oct  9 05:00:31 np0005478302 cloud-init[877]: The key's randomart image is:
Oct  9 05:00:31 np0005478302 cloud-init[877]: +---[RSA 3072]----+
Oct  9 05:00:31 np0005478302 cloud-init[877]: |.oo+.            |
Oct  9 05:00:31 np0005478302 cloud-init[877]: |++* o   o        |
Oct  9 05:00:31 np0005478302 cloud-init[877]: |o+.B +o=         |
Oct  9 05:00:31 np0005478302 cloud-init[877]: |..+..oXo         |
Oct  9 05:00:31 np0005478302 cloud-init[877]: | o o+O+ S        |
Oct  9 05:00:31 np0005478302 cloud-init[877]: |  +  =*E .       |
Oct  9 05:00:31 np0005478302 cloud-init[877]: |   . .=+=.       |
Oct  9 05:00:31 np0005478302 cloud-init[877]: |    ...++.       |
Oct  9 05:00:31 np0005478302 cloud-init[877]: |    ..  oo       |
Oct  9 05:00:31 np0005478302 cloud-init[877]: +----[SHA256]-----+
Oct  9 05:00:31 np0005478302 cloud-init[877]: Generating public/private ecdsa key pair.
Oct  9 05:00:31 np0005478302 cloud-init[877]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct  9 05:00:31 np0005478302 cloud-init[877]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct  9 05:00:31 np0005478302 cloud-init[877]: The key fingerprint is:
Oct  9 05:00:31 np0005478302 cloud-init[877]: SHA256:3doyCiJgIvzOLn2ul0nS6dfKqc7OWaQZicS6VbTf3VA root@np0005478302
Oct  9 05:00:31 np0005478302 cloud-init[877]: The key's randomart image is:
Oct  9 05:00:31 np0005478302 cloud-init[877]: +---[ECDSA 256]---+
Oct  9 05:00:31 np0005478302 cloud-init[877]: |     .       E   |
Oct  9 05:00:31 np0005478302 cloud-init[877]: |  . . .     .    |
Oct  9 05:00:31 np0005478302 cloud-init[877]: |   o o     .     |
Oct  9 05:00:31 np0005478302 cloud-init[877]: |. o o o ....o    |
Oct  9 05:00:31 np0005478302 cloud-init[877]: |++ o.o.oS.....   |
Oct  9 05:00:31 np0005478302 cloud-init[877]: |+.+. +=    o     |
Oct  9 05:00:31 np0005478302 cloud-init[877]: | .o.=o+.. + .    |
Oct  9 05:00:31 np0005478302 cloud-init[877]: | .oooB+o.o o     |
Oct  9 05:00:31 np0005478302 cloud-init[877]: |  o=BOo+o        |
Oct  9 05:00:31 np0005478302 cloud-init[877]: +----[SHA256]-----+
Oct  9 05:00:31 np0005478302 cloud-init[877]: Generating public/private ed25519 key pair.
Oct  9 05:00:31 np0005478302 cloud-init[877]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct  9 05:00:31 np0005478302 cloud-init[877]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct  9 05:00:31 np0005478302 cloud-init[877]: The key fingerprint is:
Oct  9 05:00:31 np0005478302 cloud-init[877]: SHA256:ZLutXbucvZ2LLWKjiNG4Gu8/bJpZQf/MkgvvWzFz2kg root@np0005478302
Oct  9 05:00:31 np0005478302 cloud-init[877]: The key's randomart image is:
Oct  9 05:00:31 np0005478302 cloud-init[877]: +--[ED25519 256]--+
Oct  9 05:00:31 np0005478302 cloud-init[877]: |                 |
Oct  9 05:00:31 np0005478302 cloud-init[877]: |                 |
Oct  9 05:00:31 np0005478302 cloud-init[877]: |       .o        |
Oct  9 05:00:31 np0005478302 cloud-init[877]: |      .o..       |
Oct  9 05:00:31 np0005478302 cloud-init[877]: |       .S. E .   |
Oct  9 05:00:31 np0005478302 cloud-init[877]: |      o .o* O    |
Oct  9 05:00:31 np0005478302 cloud-init[877]: |   . o.+.o.B..   |
Oct  9 05:00:31 np0005478302 cloud-init[877]: |    o B++o+* =o..|
Oct  9 05:00:31 np0005478302 cloud-init[877]: |   .oO+o+*= Bo+=o|
Oct  9 05:00:31 np0005478302 cloud-init[877]: +----[SHA256]-----+
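[annotation] cloud-init regenerates all three host key pairs and prints their SHA256 fingerprints. A minimal sketch that recomputes such a fingerprint from a public key file, matching what `ssh-keygen -lf` reports:

    import base64, hashlib

    def fingerprint(pubkey_path: str) -> str:
        # "ssh-ed25519 <base64-blob> comment" -> SHA256 of the decoded blob,
        # base64-encoded without padding, as in the log lines above.
        blob = open(pubkey_path).read().split()[1]
        digest = hashlib.sha256(base64.b64decode(blob)).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    print(fingerprint("/etc/ssh/ssh_host_ed25519_key.pub"))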
Oct  9 05:00:31 np0005478302 systemd[1]: Finished Cloud-init: Network Stage.
Oct  9 05:00:31 np0005478302 systemd[1]: Reached target Cloud-config availability.
Oct  9 05:00:31 np0005478302 systemd[1]: Reached target Network is Online.
Oct  9 05:00:31 np0005478302 systemd[1]: Starting Cloud-init: Config Stage...
Oct  9 05:00:31 np0005478302 systemd[1]: Starting Notify NFS peers of a restart...
Oct  9 05:00:31 np0005478302 systemd[1]: Starting System Logging Service...
Oct  9 05:00:31 np0005478302 systemd[1]: Starting OpenSSH server daemon...
Oct  9 05:00:31 np0005478302 sm-notify[960]: Version 2.5.4 starting
Oct  9 05:00:31 np0005478302 systemd[1]: Starting Permit User Sessions...
Oct  9 05:00:31 np0005478302 systemd[1]: Started Notify NFS peers of a restart.
Oct  9 05:00:31 np0005478302 systemd[1]: Started OpenSSH server daemon.
Oct  9 05:00:31 np0005478302 systemd[1]: Finished Permit User Sessions.
Oct  9 05:00:31 np0005478302 systemd[1]: Started Command Scheduler.
Oct  9 05:00:31 np0005478302 rsyslogd[961]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="961" x-info="https://www.rsyslog.com"] start
Oct  9 05:00:31 np0005478302 systemd[1]: Started Getty on tty1.
Oct  9 05:00:31 np0005478302 rsyslogd[961]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct  9 05:00:31 np0005478302 systemd[1]: Started Serial Getty on ttyS0.
Oct  9 05:00:31 np0005478302 systemd[1]: Reached target Login Prompts.
Oct  9 05:00:31 np0005478302 systemd[1]: Started System Logging Service.
Oct  9 05:00:31 np0005478302 systemd[1]: Reached target Multi-User System.
Oct  9 05:00:31 np0005478302 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct  9 05:00:32 np0005478302 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct  9 05:00:32 np0005478302 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct  9 05:00:32 np0005478302 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 05:00:32 np0005478302 cloud-init[974]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 09 Oct 2025 09:00:32 +0000. Up 10.63 seconds.
Oct  9 05:00:32 np0005478302 systemd[1]: Finished Cloud-init: Config Stage.
Oct  9 05:00:32 np0005478302 systemd[1]: Starting Cloud-init: Final Stage...
Oct  9 05:00:32 np0005478302 cloud-init[978]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 09 Oct 2025 09:00:32 +0000. Up 10.95 seconds.
Oct  9 05:00:32 np0005478302 cloud-init[980]: #############################################################
Oct  9 05:00:32 np0005478302 cloud-init[981]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct  9 05:00:32 np0005478302 cloud-init[983]: 256 SHA256:3doyCiJgIvzOLn2ul0nS6dfKqc7OWaQZicS6VbTf3VA root@np0005478302 (ECDSA)
Oct  9 05:00:32 np0005478302 cloud-init[985]: 256 SHA256:ZLutXbucvZ2LLWKjiNG4Gu8/bJpZQf/MkgvvWzFz2kg root@np0005478302 (ED25519)
Oct  9 05:00:32 np0005478302 cloud-init[987]: 3072 SHA256:SjSL87cDzENWyuZUyiChhRvvwxEze0SI7P1fok53LsE root@np0005478302 (RSA)
Oct  9 05:00:32 np0005478302 cloud-init[988]: -----END SSH HOST KEY FINGERPRINTS-----
Oct  9 05:00:32 np0005478302 cloud-init[989]: #############################################################
Oct  9 05:00:32 np0005478302 cloud-init[978]: Cloud-init v. 24.4-7.el9 finished at Thu, 09 Oct 2025 09:00:32 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.08 seconds
Oct  9 05:00:32 np0005478302 systemd[1]: Finished Cloud-init: Final Stage.
Oct  9 05:00:32 np0005478302 systemd[1]: Reached target Cloud-init target.
Oct  9 05:00:32 np0005478302 systemd[1]: Startup finished in 1.228s (kernel) + 1.993s (initrd) + 7.908s (userspace) = 11.130s.
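[annotation] Boot completes in 11.130 s end to end. A minimal sketch for breaking that figure down after the fact with systemd's own tooling:

    import subprocess

    # `systemd-analyze time` repeats the kernel/initrd/userspace split;
    # `systemd-analyze blame` ranks units by activation time.
    print(subprocess.run(["systemd-analyze", "time"],
                         capture_output=True, text=True).stdout)
    blame = subprocess.run(["systemd-analyze", "blame"],
                           capture_output=True, text=True).stdout.splitlines()
    print("\n".join(blame[:5]))   # the five slowest units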
Oct  9 05:00:36 np0005478302 irqbalance[740]: Cannot change IRQ 45 affinity: Operation not permitted
Oct  9 05:00:36 np0005478302 irqbalance[740]: IRQ 45 affinity is now unmanaged
Oct  9 05:00:36 np0005478302 irqbalance[740]: Cannot change IRQ 44 affinity: Operation not permitted
Oct  9 05:00:36 np0005478302 irqbalance[740]: IRQ 44 affinity is now unmanaged
Oct  9 05:00:36 np0005478302 irqbalance[740]: Cannot change IRQ 42 affinity: Operation not permitted
Oct  9 05:00:36 np0005478302 irqbalance[740]: IRQ 42 affinity is now unmanaged
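[annotation] irqbalance gets EPERM on IRQs 42, 44 and 45 and stops managing them, a pattern commonly seen with virtio interrupt vectors on KVM guests whose affinity cannot be changed. A minimal sketch for inspecting the masks it could not rewrite, via the same /proc interface irqbalance uses:

    from pathlib import Path

    for irq in (42, 44, 45):
        mask = Path(f"/proc/irq/{irq}/smp_affinity").read_text().strip()
        print(f"IRQ {irq}: cpu affinity mask {mask}")   # writing here needs root and may still return EPERM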
Oct  9 05:00:40 np0005478302 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 05:00:56 np0005478302 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 05:01:02 np0005478302 systemd[1]: Created slice User Slice of UID 1000.
Oct  9 05:01:02 np0005478302 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  9 05:01:02 np0005478302 systemd-logind[745]: New session 1 of user zuul.
Oct  9 05:01:02 np0005478302 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  9 05:01:02 np0005478302 systemd[1]: Starting User Manager for UID 1000...
Oct  9 05:01:02 np0005478302 systemd[1032]: Queued start job for default target Main User Target.
Oct  9 05:01:02 np0005478302 systemd[1032]: Created slice User Application Slice.
Oct  9 05:01:02 np0005478302 systemd[1032]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 05:01:02 np0005478302 systemd[1032]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 05:01:02 np0005478302 systemd[1032]: Reached target Paths.
Oct  9 05:01:02 np0005478302 systemd[1032]: Reached target Timers.
Oct  9 05:01:02 np0005478302 systemd[1032]: Starting D-Bus User Message Bus Socket...
Oct  9 05:01:02 np0005478302 systemd[1032]: Starting Create User's Volatile Files and Directories...
Oct  9 05:01:02 np0005478302 systemd[1032]: Listening on D-Bus User Message Bus Socket.
Oct  9 05:01:02 np0005478302 systemd[1032]: Reached target Sockets.
Oct  9 05:01:02 np0005478302 systemd[1032]: Finished Create User's Volatile Files and Directories.
Oct  9 05:01:02 np0005478302 systemd[1032]: Reached target Basic System.
Oct  9 05:01:02 np0005478302 systemd[1032]: Reached target Main User Target.
Oct  9 05:01:02 np0005478302 systemd[1032]: Startup finished in 80ms.
Oct  9 05:01:02 np0005478302 systemd[1]: Started User Manager for UID 1000.
Oct  9 05:01:02 np0005478302 systemd[1]: Started Session 1 of User zuul.
Oct  9 05:01:02 np0005478302 python3[1114]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:01:04 np0005478302 python3[1142]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:01:10 np0005478302 python3[1196]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:01:11 np0005478302 python3[1236]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct  9 05:01:13 np0005478302 python3[1262]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDJHvXKF+OC4TiCL/aa/o6rq9+SFP7bwIAGJR40fwDShswdP6EsCB3q74rxa7HZk7nAlq9GsqcvEMBnmYvXZUScuzDatbNHHj3L31gOIlnhwqJ+iI2XdTfBbmIf8ccHDrx1xB3Hr6l9Q5eqR06BX9lfG4zf0ZMnKgwxfT7bXERv1O989RrexR2EoG/yjbB1iGKYDIvULj9yB/Lzd91Yva830/7KuOe3mZkeUMPkp7g4dMGF7POukU3bb+UgETc+cweFS+cE2oeZeFxj6d6jKBDkpWNKLJcng32oQUvkUbS53tMgPVCo75ZmBtWas4DZeuhJOIo5dD1eFlOVaBAP+38K/N68/C4UkR/HKomLSssPXAmV6MLWoDu9thuzfr8bgmyZT4hnBveyALdASAffBpfuv8R/2Z6K/F7FIDgew4RyZcKyQjOvsxPqfI+6+Jq4hxxOiGGLQmKsHF+T/crR7fIS8NKaqRy/QwezRy5WD56EvUh4/y9u3fKQK8uVbRdYHb0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:13 np0005478302 python3[1286]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:13 np0005478302 python3[1385]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:01:14 np0005478302 python3[1456]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760000473.6792326-251-225298872049380/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=a671a8077fb34b76835f3572668f1b22_id_rsa follow=False checksum=c7f5caef86df45fcb47abb858beda9b774bf09c9 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:14 np0005478302 python3[1579]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:01:14 np0005478302 python3[1650]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760000474.2899761-306-161560103711317/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=a671a8077fb34b76835f3572668f1b22_id_rsa.pub follow=False checksum=81cf534faaee7eab1d192c4cf78a7f0119953204 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
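[annotation] Ansible logs file modes in decimal, which is easy to misread: the mode=448, mode=384 and mode=420 above are 0o700, 0o600 and 0o644. A one-liner makes the mapping for the values seen in this log explicit:

    for mode in (448, 384, 420, 493, 511, 288):
        print(mode, "->", oct(mode))   # 0o700, 0o600, 0o644, 0o755, 0o777, 0o440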
Oct  9 05:01:15 np0005478302 python3[1698]: ansible-ping Invoked with data=pong
Oct  9 05:01:16 np0005478302 python3[1722]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:01:18 np0005478302 python3[1776]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct  9 05:01:19 np0005478302 python3[1808]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:19 np0005478302 python3[1832]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:19 np0005478302 python3[1856]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:19 np0005478302 python3[1880]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:19 np0005478302 python3[1904]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:20 np0005478302 python3[1928]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:21 np0005478302 python3[1954]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:21 np0005478302 python3[2032]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:01:22 np0005478302 python3[2105]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760000481.5883589-31-275146251199624/source follow=False _original_basename=mirror_info.sh.j2 checksum=3f92644b791816833989d215b9a84c589a7b8ebd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:22 np0005478302 python3[2153]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:22 np0005478302 python3[2177]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:23 np0005478302 python3[2201]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:23 np0005478302 python3[2225]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:23 np0005478302 python3[2249]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:23 np0005478302 python3[2273]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:23 np0005478302 python3[2297]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:24 np0005478302 python3[2321]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:24 np0005478302 python3[2345]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:24 np0005478302 python3[2369]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:24 np0005478302 python3[2393]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:24 np0005478302 python3[2417]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:25 np0005478302 python3[2441]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:25 np0005478302 python3[2465]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:25 np0005478302 python3[2489]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:25 np0005478302 python3[2513]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:25 np0005478302 python3[2537]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:26 np0005478302 python3[2561]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:26 np0005478302 python3[2585]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:26 np0005478302 python3[2609]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:26 np0005478302 python3[2633]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:26 np0005478302 python3[2657]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:27 np0005478302 python3[2681]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:27 np0005478302 python3[2705]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:27 np0005478302 python3[2729]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:27 np0005478302 python3[2753]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
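[annotation] The long run of ansible-authorized_key calls above installs one public key per team member for the zuul user. A minimal sketch of the idempotent append such a module performs, with illustrative paths and key (not the module's actual implementation):

    from pathlib import Path

    def authorize(home: str, pubkey: str) -> None:
        ssh_dir = Path(home, ".ssh")
        ssh_dir.mkdir(mode=0o700, exist_ok=True)        # the perms sshd insists on
        auth = ssh_dir / "authorized_keys"
        lines = auth.read_text().splitlines() if auth.exists() else []
        if pubkey not in lines:                         # idempotent: skip if already present
            lines.append(pubkey)
            auth.write_text("\n".join(lines) + "\n")
            auth.chmod(0o600)

    # authorize("/home/zuul", "ssh-ed25519 AAAA... user@example.com")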
Oct  9 05:01:30 np0005478302 python3[2779]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  9 05:01:30 np0005478302 systemd[1]: Starting Time & Date Service...
Oct  9 05:01:30 np0005478302 systemd[1]: Started Time & Date Service.
Oct  9 05:01:30 np0005478302 systemd-timedated[2781]: Changed time zone to 'UTC' (UTC).
Oct  9 05:01:30 np0005478302 python3[2810]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:30 np0005478302 python3[2886]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:01:31 np0005478302 python3[2957]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1760000490.6445513-251-77287995744527/source _original_basename=tmp2y5hluyf follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:31 np0005478302 python3[3057]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:01:31 np0005478302 python3[3128]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760000491.2253106-301-136897202375509/source _original_basename=tmp6orm3r9_ follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:32 np0005478302 python3[3230]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:01:32 np0005478302 python3[3303]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760000492.1029363-381-237002997104767/source _original_basename=tmp1qe65ss4 follow=False checksum=01a1e3f52b61fe8f6668043389a1662e223f45ce backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:32 np0005478302 python3[3351]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:01:33 np0005478302 python3[3377]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:01:33 np0005478302 python3[3457]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:01:33 np0005478302 python3[3530]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1760000493.2622182-451-258847646094694/source _original_basename=tmp3uckk_k3 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:34 np0005478302 python3[3581]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e08-49e2-22a3-075b-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
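[annotation] After dropping zuul-sudo-grep into /etc/sudoers.d (mode=288, i.e. 0o440), the job validates the sudoers configuration with visudo. A minimal sketch of the same check, including the single-file variant that is useful before installing a drop-in:

    import subprocess

    # `visudo -c` checks the full configuration; `-cf FILE` checks one candidate file.
    res = subprocess.run(["visudo", "-cf", "/etc/sudoers.d/zuul-sudo-grep"],
                         capture_output=True, text=True)
    print("parsed OK" if res.returncode == 0 else res.stderr)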
Oct  9 05:01:34 np0005478302 python3[3609]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e08-49e2-22a3-075b-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct  9 05:01:36 np0005478302 python3[3637]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:36 np0005478302 irqbalance[740]: Cannot change IRQ 43 affinity: Operation not permitted
Oct  9 05:01:36 np0005478302 irqbalance[740]: IRQ 43 affinity is now unmanaged
Oct  9 05:01:38 np0005478302 chronyd[752]: Selected source 46.37.96.107 (2.centos.pool.ntp.org)
Oct  9 05:01:51 np0005478302 python3[3663]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:02:00 np0005478302 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  9 05:02:20 np0005478302 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Oct  9 05:02:20 np0005478302 kernel: pci 0000:07:00.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct  9 05:02:20 np0005478302 kernel: pci 0000:07:00.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct  9 05:02:20 np0005478302 kernel: pci 0000:07:00.0: ROM [mem 0x00000000-0x0003ffff pref]
Oct  9 05:02:20 np0005478302 kernel: pci 0000:07:00.0: ROM [mem 0xfe000000-0xfe03ffff pref]: assigned
Oct  9 05:02:20 np0005478302 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfb600000-0xfb603fff 64bit pref]: assigned
Oct  9 05:02:20 np0005478302 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfe040000-0xfe040fff]: assigned
Oct  9 05:02:20 np0005478302 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Oct  9 05:02:20 np0005478302 NetworkManager[811]: <info>  [1760000540.6626] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  9 05:02:20 np0005478302 systemd-udevd[3666]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 05:02:20 np0005478302 NetworkManager[811]: <info>  [1760000540.6845] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:02:20 np0005478302 NetworkManager[811]: <info>  [1760000540.6862] settings: (eth1): created default wired connection 'Wired connection 1'
Oct  9 05:02:20 np0005478302 NetworkManager[811]: <info>  [1760000540.6864] device (eth1): carrier: link connected
Oct  9 05:02:20 np0005478302 NetworkManager[811]: <info>  [1760000540.6865] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 05:02:20 np0005478302 NetworkManager[811]: <info>  [1760000540.6869] policy: auto-activating connection 'Wired connection 1' (91b0cf9d-52d5-3c28-aeb9-d8a6541bcd9c)
Oct  9 05:02:20 np0005478302 NetworkManager[811]: <info>  [1760000540.6872] device (eth1): Activation: starting connection 'Wired connection 1' (91b0cf9d-52d5-3c28-aeb9-d8a6541bcd9c)
Oct  9 05:02:20 np0005478302 NetworkManager[811]: <info>  [1760000540.6873] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:02:20 np0005478302 NetworkManager[811]: <info>  [1760000540.6875] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:02:20 np0005478302 NetworkManager[811]: <info>  [1760000540.6877] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:02:20 np0005478302 NetworkManager[811]: <info>  [1760000540.6881] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:02:21 np0005478302 python3[3693]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e08-49e2-3fb7-b15f-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:02:30 np0005478302 python3[3773]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:02:31 np0005478302 python3[3846]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760000550.6845038-113-273536253173200/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=7a9b0e8a3c06346945d146f316ec280871f4cf6d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
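[annotation] The job deploys ci-private-network.nmconnection as a root-owned 0600 keyfile, evidently for the eth1 device hot-plugged above, then restarts NetworkManager to pick it up. A minimal sketch of that keyfile format per nm-settings-keyfile(5); the id, interface and addressing below are illustrative, not the job's actual profile:

    import configparser

    profile = configparser.ConfigParser()
    profile["connection"] = {"id": "ci-private-network", "type": "ethernet",
                             "interface-name": "eth1"}
    profile["ipv4"] = {"method": "auto"}
    profile["ipv6"] = {"method": "ignore"}
    with open("ci-private-network.nmconnection", "w") as f:
        profile.write(f)   # install to /etc/NetworkManager/system-connections/ as root, mode 0600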
Oct  9 05:02:31 np0005478302 python3[3896]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:02:31 np0005478302 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  9 05:02:31 np0005478302 systemd[1]: Stopped Network Manager Wait Online.
Oct  9 05:02:31 np0005478302 systemd[1]: Stopping Network Manager Wait Online...
Oct  9 05:02:31 np0005478302 NetworkManager[811]: <info>  [1760000551.7629] caught SIGTERM, shutting down normally.
Oct  9 05:02:31 np0005478302 systemd[1]: Stopping Network Manager...
Oct  9 05:02:31 np0005478302 NetworkManager[811]: <info>  [1760000551.7635] dhcp4 (eth0): canceled DHCP transaction
Oct  9 05:02:31 np0005478302 NetworkManager[811]: <info>  [1760000551.7635] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:02:31 np0005478302 NetworkManager[811]: <info>  [1760000551.7635] dhcp4 (eth0): state changed no lease
Oct  9 05:02:31 np0005478302 NetworkManager[811]: <info>  [1760000551.7636] dhcp6 (eth0): canceled DHCP transaction
Oct  9 05:02:31 np0005478302 NetworkManager[811]: <info>  [1760000551.7636] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:02:31 np0005478302 NetworkManager[811]: <info>  [1760000551.7636] dhcp6 (eth0): state changed no lease
Oct  9 05:02:31 np0005478302 NetworkManager[811]: <info>  [1760000551.7638] manager: NetworkManager state is now CONNECTING
Oct  9 05:02:31 np0005478302 NetworkManager[811]: <info>  [1760000551.7701] dhcp4 (eth1): canceled DHCP transaction
Oct  9 05:02:31 np0005478302 NetworkManager[811]: <info>  [1760000551.7701] dhcp4 (eth1): state changed no lease
Oct  9 05:02:31 np0005478302 NetworkManager[811]: <info>  [1760000551.7719] exiting (success)
Oct  9 05:02:31 np0005478302 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 05:02:31 np0005478302 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 05:02:31 np0005478302 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  9 05:02:31 np0005478302 systemd[1]: Stopped Network Manager.
Oct  9 05:02:31 np0005478302 systemd[1]: Starting Network Manager...
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8200] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:7c020c8f-ae8f-497c-a51c-02a263af6717)
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8201] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8245] manager[0x56194cd6c090]: monitoring kernel firmware directory '/lib/firmware'.
Oct  9 05:02:31 np0005478302 systemd[1]: Starting Hostname Service...
Oct  9 05:02:31 np0005478302 systemd[1]: Started Hostname Service.
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8825] hostname: hostname: using hostnamed
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8826] hostname: static hostname changed from (none) to "np0005478302"
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8829] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8831] manager[0x56194cd6c090]: rfkill: Wi-Fi hardware radio set enabled
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8832] manager[0x56194cd6c090]: rfkill: WWAN hardware radio set enabled
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8851] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8851] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8852] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8852] manager: Networking is enabled by state file
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8854] settings: Loaded settings plugin: keyfile (internal)
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8857] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8875] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8882] dhcp: init: Using DHCP client 'internal'
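[annotation] The ifcfg-rh deprecation warning above names the supported migration path; running it converts any remaining /etc/sysconfig/network-scripts profiles to keyfiles in place:

    nmcli connection migrate    # convert ifcfg-rh profiles to keyfile format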
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8884] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8887] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8891] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8895] device (lo): Activation: starting connection 'lo' (42cdbca8-f689-4fdd-9617-072161b4803e)
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8900] device (eth0): carrier: link connected
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8903] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8907] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8907] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8911] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8915] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8919] device (eth1): carrier: link connected
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8923] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8926] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (91b0cf9d-52d5-3c28-aeb9-d8a6541bcd9c) (indicated)
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8926] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8931] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8935] device (eth1): Activation: starting connection 'Wired connection 1' (91b0cf9d-52d5-3c28-aeb9-d8a6541bcd9c)
Oct  9 05:02:31 np0005478302 systemd[1]: Started Network Manager.
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8948] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8952] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8953] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8954] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8955] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8956] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8957] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8959] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8960] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8965] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8967] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8968] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8970] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8976] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8980] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8988] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8990] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.8994] device (lo): Activation: successful, device activated.
Oct  9 05:02:31 np0005478302 systemd[1]: Starting Network Manager Wait Online...
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.9005] dhcp4 (eth0): state changed new lease, address=192.168.26.64
Oct  9 05:02:31 np0005478302 NetworkManager[3909]: <info>  [1760000551.9009] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  9 05:02:32 np0005478302 python3[3968]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e08-49e2-3fb7-b15f-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:02:32 np0005478302 NetworkManager[3909]: <info>  [1760000552.9150] dhcp6 (eth0): state changed new lease, address=2001:db8::186
Oct  9 05:02:32 np0005478302 NetworkManager[3909]: <info>  [1760000552.9161] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  9 05:02:32 np0005478302 NetworkManager[3909]: <info>  [1760000552.9182] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  9 05:02:32 np0005478302 NetworkManager[3909]: <info>  [1760000552.9184] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  9 05:02:32 np0005478302 NetworkManager[3909]: <info>  [1760000552.9188] manager: NetworkManager state is now CONNECTED_SITE
Oct  9 05:02:32 np0005478302 NetworkManager[3909]: <info>  [1760000552.9196] device (eth0): Activation: successful, device activated.
Oct  9 05:02:32 np0005478302 NetworkManager[3909]: <info>  [1760000552.9204] manager: NetworkManager state is now CONNECTED_GLOBAL
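[annotation] At this point eth0 holds both a v4 and a v6 lease and NetworkManager reports CONNECTED_GLOBAL. A quick way to confirm the same state by hand (the expected values in the comments are inferred from the log, not guaranteed):

    nmcli -t -f DEVICE,STATE,CONNECTION device   # expect eth0:connected:System eth0
    ip -4 addr show eth0                         # 192.168.26.64 from the lease above
    ip route                                     # as invoked by the Ansible task at 05:02:32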
Oct  9 05:02:42 np0005478302 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 05:03:01 np0005478302 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 05:03:17 np0005478302 NetworkManager[3909]: <info>  [1760000597.5501] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  9 05:03:17 np0005478302 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 05:03:17 np0005478302 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 05:03:17 np0005478302 NetworkManager[3909]: <info>  [1760000597.5711] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  9 05:03:17 np0005478302 NetworkManager[3909]: <info>  [1760000597.5713] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  9 05:03:17 np0005478302 NetworkManager[3909]: <info>  [1760000597.5720] device (eth1): Activation: successful, device activated.
Oct  9 05:03:17 np0005478302 NetworkManager[3909]: <info>  [1760000597.5725] manager: startup complete
Oct  9 05:03:17 np0005478302 NetworkManager[3909]: <info>  [1760000597.5726] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct  9 05:03:17 np0005478302 NetworkManager[3909]: <warn>  [1760000597.5732] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct  9 05:03:17 np0005478302 systemd[1]: Finished Network Manager Wait Online.
Oct  9 05:03:17 np0005478302 NetworkManager[3909]: <info>  [1760000597.5751] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct  9 05:03:17 np0005478302 NetworkManager[3909]: <info>  [1760000597.5840] dhcp4 (eth1): canceled DHCP transaction
Oct  9 05:03:17 np0005478302 NetworkManager[3909]: <info>  [1760000597.5840] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:03:17 np0005478302 NetworkManager[3909]: <info>  [1760000597.5841] dhcp4 (eth1): state changed no lease
Oct  9 05:03:17 np0005478302 NetworkManager[3909]: <info>  [1760000597.5849] policy: auto-activating connection 'ci-private-network' (99381071-70a1-5f50-b83c-41d249156268)
Oct  9 05:03:17 np0005478302 NetworkManager[3909]: <info>  [1760000597.5852] device (eth1): Activation: starting connection 'ci-private-network' (99381071-70a1-5f50-b83c-41d249156268)
Oct  9 05:03:17 np0005478302 NetworkManager[3909]: <info>  [1760000597.5853] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:03:17 np0005478302 NetworkManager[3909]: <info>  [1760000597.5854] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:03:17 np0005478302 NetworkManager[3909]: <info>  [1760000597.5858] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:03:17 np0005478302 NetworkManager[3909]: <info>  [1760000597.5863] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:03:17 np0005478302 NetworkManager[3909]: <info>  [1760000597.5890] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:03:17 np0005478302 NetworkManager[3909]: <info>  [1760000597.5891] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:03:17 np0005478302 NetworkManager[3909]: <info>  [1760000597.5895] device (eth1): Activation: successful, device activated.
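[annotation] Reading the eth1 sequence: the assumed profile 'Wired connection 1' waited out its 45-second DHCP window, failed with ip-config-unavailable, and NetworkManager then auto-activated the freshly installed 'ci-private-network' profile, which completed with no DHCP wait. That is consistent with a static (ipv4.method=manual) configuration, though the profile body is not in the log; to check that assumption:

    nmcli -f connection.id,ipv4.method connection show ci-private-network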
Oct  9 05:03:24 np0005478302 systemd[1032]: Starting Mark boot as successful...
Oct  9 05:03:24 np0005478302 systemd[1032]: Finished Mark boot as successful.
Oct  9 05:03:27 np0005478302 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 05:03:32 np0005478302 systemd-logind[745]: Session 1 logged out. Waiting for processes to exit.
Oct  9 05:03:54 np0005478302 systemd-logind[745]: New session 3 of user zuul.
Oct  9 05:03:54 np0005478302 systemd[1]: Started Session 3 of User zuul.
Oct  9 05:03:54 np0005478302 python3[4097]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:03:54 np0005478302 python3[4170]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760000634.5367167-379-160084129523069/source _original_basename=tmpmwqt8uy_ follow=False checksum=26ebf755fae5a80bfc5f098245c8908b029e5df9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:03:57 np0005478302 systemd[1]: session-3.scope: Deactivated successfully.
Oct  9 05:03:57 np0005478302 systemd-logind[745]: Session 3 logged out. Waiting for processes to exit.
Oct  9 05:03:57 np0005478302 systemd-logind[745]: Removed session 3.
Oct  9 05:06:24 np0005478302 systemd[1032]: Created slice User Background Tasks Slice.
Oct  9 05:06:24 np0005478302 systemd[1032]: Starting Cleanup of User's Temporary Files and Directories...
Oct  9 05:06:24 np0005478302 systemd[1032]: Finished Cleanup of User's Temporary Files and Directories.
Oct  9 05:08:54 np0005478302 systemd-logind[745]: New session 4 of user zuul.
Oct  9 05:08:54 np0005478302 systemd[1]: Started Session 4 of User zuul.
Oct  9 05:08:54 np0005478302 python3[4227]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e08-49e2-2dac-3627-000000001cfc-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:08:54 np0005478302 python3[4256]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:08:54 np0005478302 python3[4282]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:08:55 np0005478302 python3[4308]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:08:55 np0005478302 python3[4334]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:08:56 np0005478302 python3[4361]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:08:56 np0005478302 python3[4361]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct  9 05:08:56 np0005478302 python3[4387]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 05:08:56 np0005478302 systemd[1]: Reloading.
Oct  9 05:08:56 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:08:58 np0005478302 python3[4443]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct  9 05:08:58 np0005478302 python3[4469]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:08:58 np0005478302 python3[4497]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:08:58 np0005478302 python3[4525]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:08:59 np0005478302 python3[4553]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:08:59 np0005478302 python3[4580]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e08-49e2-2dac-3627-000000001d02-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:09:00 np0005478302 python3[4610]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
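[annotation] The block above throttles block I/O per systemd slice via cgroup v2: DefaultIOAccounting=yes plus the daemon-reload makes systemd enable the io controller, the wait_for task blocks until io.max appears, and each echo writes limits keyed by the MAJ:MIN of /dev/vda. The same sequence by hand (as root):

    lsblk -nd -o MAJ:MIN /dev/vda    # -> 252:0, the device key used below
    for slice in init.scope machine.slice system.slice user.slice; do
        echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
            > /sys/fs/cgroup/$slice/io.max
    done
    cat /sys/fs/cgroup/system.slice/io.max    # verify the limits took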
Oct  9 05:09:02 np0005478302 systemd[1]: session-4.scope: Deactivated successfully.
Oct  9 05:09:02 np0005478302 systemd[1]: session-4.scope: Consumed 2.439s CPU time.
Oct  9 05:09:02 np0005478302 systemd-logind[745]: Session 4 logged out. Waiting for processes to exit.
Oct  9 05:09:02 np0005478302 systemd-logind[745]: Removed session 4.
Oct  9 05:09:04 np0005478302 systemd-logind[745]: New session 5 of user zuul.
Oct  9 05:09:04 np0005478302 systemd[1]: Started Session 5 of User zuul.
Oct  9 05:09:04 np0005478302 python3[4645]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  9 05:09:47 np0005478302 kernel: SELinux:  Converting 365 SID table entries...
Oct  9 05:09:47 np0005478302 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:09:47 np0005478302 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:09:47 np0005478302 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:09:47 np0005478302 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:09:47 np0005478302 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:09:47 np0005478302 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:09:47 np0005478302 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:09:53 np0005478302 kernel: SELinux:  Converting 365 SID table entries...
Oct  9 05:09:53 np0005478302 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:09:53 np0005478302 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:09:53 np0005478302 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:09:53 np0005478302 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:09:53 np0005478302 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:09:53 np0005478302 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:09:53 np0005478302 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:10:00 np0005478302 kernel: SELinux:  Converting 365 SID table entries...
Oct  9 05:10:00 np0005478302 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:10:00 np0005478302 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:10:00 np0005478302 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:10:00 np0005478302 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:10:00 np0005478302 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:10:00 np0005478302 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:10:00 np0005478302 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:10:01 np0005478302 setsebool[4734]: The virt_use_nfs policy boolean was changed to 1 by root
Oct  9 05:10:01 np0005478302 setsebool[4734]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
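[annotation] The two boolean flips above allow virt/container domains NFS access and full capabilities in sandboxes. The CLI equivalent is below; -P persists the change across reboots, and the log does not show whether persistence was requested:

    setsebool -P virt_use_nfs=1 virt_sandbox_use_all_caps=1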
Oct  9 05:10:09 np0005478302 kernel: SELinux:  Converting 368 SID table entries...
Oct  9 05:10:09 np0005478302 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:10:09 np0005478302 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:10:09 np0005478302 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:10:09 np0005478302 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:10:09 np0005478302 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:10:09 np0005478302 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:10:09 np0005478302 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:10:22 np0005478302 dbus-broker-launch[733]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  9 05:10:22 np0005478302 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 05:10:22 np0005478302 systemd[1]: Starting man-db-cache-update.service...
Oct  9 05:10:22 np0005478302 systemd[1]: Reloading.
Oct  9 05:10:22 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:10:22 np0005478302 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 05:10:22 np0005478302 systemd[1]: Starting PackageKit Daemon...
Oct  9 05:10:22 np0005478302 systemd[1]: Starting Authorization Manager...
Oct  9 05:10:22 np0005478302 polkitd[6848]: Started polkitd version 0.117
Oct  9 05:10:22 np0005478302 systemd[1]: Started Authorization Manager.
Oct  9 05:10:22 np0005478302 systemd[1]: Started PackageKit Daemon.
Oct  9 05:10:26 np0005478302 python3[10884]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e08-49e2-9746-57c4-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:10:27 np0005478302 kernel: evm: overlay not supported
Oct  9 05:10:27 np0005478302 systemd[1032]: Starting D-Bus User Message Bus...
Oct  9 05:10:27 np0005478302 dbus-broker-launch[11599]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct  9 05:10:27 np0005478302 dbus-broker-launch[11599]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct  9 05:10:27 np0005478302 systemd[1032]: Started D-Bus User Message Bus.
Oct  9 05:10:27 np0005478302 dbus-broker-lau[11599]: Ready
Oct  9 05:10:27 np0005478302 systemd[1032]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  9 05:10:27 np0005478302 systemd[1032]: Created slice Slice /user.
Oct  9 05:10:27 np0005478302 systemd[1032]: podman-11527.scope: unit configures an IP firewall, but not running as root.
Oct  9 05:10:27 np0005478302 systemd[1032]: (This warning is only shown for the first unit using IP firewalling.)
Oct  9 05:10:27 np0005478302 systemd[1032]: Started podman-11527.scope.
Oct  9 05:10:27 np0005478302 systemd[1032]: Started podman-pause-1bea46e7.scope.
Oct  9 05:10:28 np0005478302 systemd[1]: session-5.scope: Deactivated successfully.
Oct  9 05:10:28 np0005478302 systemd[1]: session-5.scope: Consumed 51.052s CPU time.
Oct  9 05:10:28 np0005478302 systemd-logind[745]: Session 5 logged out. Waiting for processes to exit.
Oct  9 05:10:28 np0005478302 systemd-logind[745]: Removed session 5.
Oct  9 05:10:45 np0005478302 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 05:10:45 np0005478302 systemd[1]: Finished man-db-cache-update.service.
Oct  9 05:10:45 np0005478302 systemd[1]: man-db-cache-update.service: Consumed 28.472s CPU time.
Oct  9 05:10:45 np0005478302 systemd[1]: run-r4691edcb0a074638b3ef7c77a2b8e847.service: Deactivated successfully.
Oct  9 05:10:50 np0005478302 systemd-logind[745]: New session 6 of user zuul.
Oct  9 05:10:50 np0005478302 systemd[1]: Started Session 6 of User zuul.
Oct  9 05:10:50 np0005478302 python3[26214]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFxh/nv6sQLW1yzvGqXNfnJZOZRxYC8qJcgS1V4mG6Ez91eTuQ+QeRIx7PiC27aRMgFhv+XrMbKb0XUoGYd1TGk= zuul@np0005478301#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:10:50 np0005478302 python3[26240]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFxh/nv6sQLW1yzvGqXNfnJZOZRxYC8qJcgS1V4mG6Ez91eTuQ+QeRIx7PiC27aRMgFhv+XrMbKb0XUoGYd1TGk= zuul@np0005478301#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:10:51 np0005478302 python3[26266]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005478302 update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct  9 05:10:51 np0005478302 python3[26300]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFxh/nv6sQLW1yzvGqXNfnJZOZRxYC8qJcgS1V4mG6Ez91eTuQ+QeRIx7PiC27aRMgFhv+XrMbKb0XUoGYd1TGk= zuul@np0005478301#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:10:52 np0005478302 python3[26378]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:10:52 np0005478302 python3[26451]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760001052.0774846-152-91523367617337/source _original_basename=tmpy6ph1_61 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
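[annotation] The tasks above authorize one CI key for zuul, root, and a new cloud-admin user, then drop a sudoers fragment whose body is not logged (content=NOT_LOGGING_PARAMETER). A hypothetical manual equivalent, with the key elided:

    useradd -m -s /bin/bash cloud-admin
    echo 'ecdsa-sha2-nistp256 AAAA... zuul@np0005478301' \
        >> /home/cloud-admin/.ssh/authorized_keys    # authorized_key also creates ~/.ssh with safe modes
    install -m 0640 cloud-admin.sudoers /etc/sudoers.d/cloud-admin
    visudo -cf /etc/sudoers.d/cloud-admin            # syntax-check any sudoers drop-in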
Oct  9 05:10:53 np0005478302 python3[26501]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Oct  9 05:10:53 np0005478302 systemd[1]: Starting Hostname Service...
Oct  9 05:10:53 np0005478302 systemd[1]: Started Hostname Service.
Oct  9 05:10:53 np0005478302 systemd-hostnamed[26505]: Changed pretty hostname to 'compute-0'
Oct  9 05:10:53 np0005478302 systemd-hostnamed[26505]: Hostname set to <compute-0> (static)
Oct  9 05:10:53 np0005478302 NetworkManager[3909]: <info>  [1760001053.3683] hostname: static hostname changed from "np0005478302" to "compute-0"
Oct  9 05:10:53 np0005478302 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 05:10:53 np0005478302 systemd[1]: Started Network Manager Script Dispatcher Service.
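[annotation] The hostname module with use=systemd talks to systemd-hostnamed over D-Bus, which is why Hostname Service starts on demand and NetworkManager immediately observes the change. By hand this is:

    hostnamectl set-hostname compute-0
    hostnamectl status    # confirms the static (and here pretty) hostname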
Oct  9 05:10:54 np0005478302 systemd[1]: session-6.scope: Deactivated successfully.
Oct  9 05:10:54 np0005478302 systemd[1]: session-6.scope: Consumed 1.640s CPU time.
Oct  9 05:10:54 np0005478302 systemd-logind[745]: Session 6 logged out. Waiting for processes to exit.
Oct  9 05:10:54 np0005478302 systemd-logind[745]: Removed session 6.
Oct  9 05:11:03 np0005478302 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 05:11:23 np0005478302 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 05:14:04 np0005478302 systemd-logind[745]: New session 7 of user zuul.
Oct  9 05:14:04 np0005478302 systemd[1]: Started Session 7 of User zuul.
Oct  9 05:14:05 np0005478302 python3[26598]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:14:06 np0005478302 python3[26710]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:14:07 np0005478302 python3[26783]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760001246.470579-30887-198629391554841/source mode=0755 _original_basename=delorean.repo follow=False checksum=e6ffbe2bc1ecfd38ca5198d3750b43ac3a0e1ed6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:14:07 np0005478302 python3[26809]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:14:07 np0005478302 python3[26882]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760001246.470579-30887-198629391554841/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=717d1fa230cffa8c08764d71bd0b4a50d3a90cae backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:14:07 np0005478302 python3[26908]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:14:07 np0005478302 python3[26981]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760001246.470579-30887-198629391554841/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=8163d09913b97597f86e38eb45c3003e91da783e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:14:08 np0005478302 python3[27007]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:14:08 np0005478302 python3[27080]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760001246.470579-30887-198629391554841/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=d108d0750ad5b288ccc41bc6534ea307cc51e987 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:14:08 np0005478302 python3[27106]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:14:08 np0005478302 python3[27179]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760001246.470579-30887-198629391554841/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=20c3917c672c059a872cf09a437f61890d2f89fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:14:08 np0005478302 python3[27205]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:14:09 np0005478302 python3[27278]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760001246.470579-30887-198629391554841/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=4d14f168e8a0e6930d905faffbcdf4fedd6664d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:14:09 np0005478302 python3[27304]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:14:09 np0005478302 python3[27377]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760001246.470579-30887-198629391554841/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=75ca8f9fe9a538824fd094f239c30e8ce8652e8a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:14:18 np0005478302 python3[27435]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:15:24 np0005478302 systemd[1]: Starting Cleanup of Temporary Directories...
Oct  9 05:15:24 np0005478302 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct  9 05:15:24 np0005478302 systemd[1]: Finished Cleanup of Temporary Directories.
Oct  9 05:15:24 np0005478302 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct  9 05:15:28 np0005478302 systemd[1]: packagekit.service: Deactivated successfully.
Oct  9 05:19:17 np0005478302 systemd[1]: session-7.scope: Deactivated successfully.
Oct  9 05:19:17 np0005478302 systemd[1]: session-7.scope: Consumed 3.246s CPU time.
Oct  9 05:19:17 np0005478302 systemd-logind[745]: Session 7 logged out. Waiting for processes to exit.
Oct  9 05:19:17 np0005478302 systemd-logind[745]: Removed session 7.
Oct  9 05:24:26 np0005478302 systemd-logind[745]: New session 8 of user zuul.
Oct  9 05:24:26 np0005478302 systemd[1]: Started Session 8 of User zuul.
Oct  9 05:24:26 np0005478302 python3.9[27597]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:24:27 np0005478302 python3.9[27778]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
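[annotation] The _raw_params above encode embedded newlines as #012 (the syslog control-character escape). Decoded for readability, the script is:

    set -euxo pipefail
    pushd /var/tmp
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    pushd repo-setup-main
    python3 -m venv ./venv
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./
    ./venv/bin/repo-setup current-podified -b antelope
    popd
    rm -rf repo-setup-main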
Oct  9 05:24:36 np0005478302 systemd[1]: session-8.scope: Deactivated successfully.
Oct  9 05:24:36 np0005478302 systemd[1]: session-8.scope: Consumed 6.164s CPU time.
Oct  9 05:24:36 np0005478302 systemd-logind[745]: Session 8 logged out. Waiting for processes to exit.
Oct  9 05:24:36 np0005478302 systemd-logind[745]: Removed session 8.
Oct  9 05:24:51 np0005478302 systemd-logind[745]: New session 9 of user zuul.
Oct  9 05:24:51 np0005478302 systemd[1]: Started Session 9 of User zuul.
Oct  9 05:24:51 np0005478302 python3.9[27990]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  9 05:24:52 np0005478302 python3.9[28164]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:24:53 np0005478302 python3.9[28316]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:24:53 np0005478302 python3.9[28469]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:24:54 np0005478302 python3.9[28621]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:24:55 np0005478302 python3.9[28773]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:24:55 np0005478302 python3.9[28896]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760001894.8200307-177-231203569439593/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:24:56 np0005478302 python3.9[29048]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:24:56 np0005478302 python3.9[29204]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:24:57 np0005478302 python3.9[29354]: ansible-ansible.builtin.service_facts Invoked
Oct  9 05:24:59 np0005478302 python3.9[29609]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:25:00 np0005478302 python3.9[29759]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:25:01 np0005478302 python3.9[29913]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:25:02 np0005478302 python3.9[30071]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:25:02 np0005478302 python3.9[30155]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:26:20 np0005478302 dbus-broker-launch[722]: Noticed file-system modification, trigger reload.
Oct  9 05:26:20 np0005478302 dbus-broker-launch[722]: Noticed file-system modification, trigger reload.
Oct  9 05:26:20 np0005478302 dbus-broker-launch[11599]: Noticed file-system modification, trigger reload.
Oct  9 05:26:20 np0005478302 dbus-broker-launch[722]: Noticed file-system modification, trigger reload.
Oct  9 05:26:20 np0005478302 dbus-broker-launch[11599]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct  9 05:26:20 np0005478302 dbus-broker-launch[11599]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct  9 05:26:20 np0005478302 systemd[1]: Reexecuting.
Oct  9 05:26:20 np0005478302 systemd: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  9 05:26:20 np0005478302 systemd: Detected virtualization kvm.
Oct  9 05:26:20 np0005478302 systemd: Detected architecture x86-64.
Oct  9 05:26:20 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:26:20 np0005478302 systemd[1]: Reloading.
Oct  9 05:26:20 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:26:20 np0005478302 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct  9 05:26:21 np0005478302 systemd[1]: Reloading.
Oct  9 05:26:21 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:26:21 np0005478302 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct  9 05:26:21 np0005478302 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct  9 05:26:21 np0005478302 systemd[1]: Reloading.
Oct  9 05:26:21 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:26:21 np0005478302 systemd[1]: Listening on LVM2 poll daemon socket.
Oct  9 05:26:21 np0005478302 dbus-broker-launch[722]: Noticed file-system modification, trigger reload.
Oct  9 05:26:21 np0005478302 dbus-broker-launch[722]: Noticed file-system modification, trigger reload.
Oct  9 05:26:21 np0005478302 dbus-broker-launch[722]: Noticed file-system modification, trigger reload.
Oct  9 05:27:05 np0005478302 kernel: SELinux:  Converting 2714 SID table entries...
Oct  9 05:27:05 np0005478302 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:27:05 np0005478302 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:27:05 np0005478302 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:27:05 np0005478302 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:27:05 np0005478302 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:27:05 np0005478302 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:27:05 np0005478302 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:27:06 np0005478302 dbus-broker-launch[733]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct  9 05:27:06 np0005478302 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 05:27:06 np0005478302 systemd[1]: Starting man-db-cache-update.service...
Oct  9 05:27:06 np0005478302 systemd[1]: Reloading.
Oct  9 05:27:06 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:27:06 np0005478302 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 05:27:06 np0005478302 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 05:27:06 np0005478302 systemd-journald[650]: Received SIGTERM from PID 1 (systemd).
Oct  9 05:27:06 np0005478302 systemd: Stopping Journal Service...
Oct  9 05:27:06 np0005478302 systemd: Stopping Rule-based Manager for Device Events and Files...
Oct  9 05:27:06 np0005478302 systemd-journald[650]: Journal stopped
Oct  9 05:27:06 np0005478302 systemd: systemd-journald.service: Deactivated successfully.
Oct  9 05:27:06 np0005478302 systemd: Stopped Journal Service.
Oct  9 05:27:06 np0005478302 systemd: Starting Journal Service...
Oct  9 05:27:06 np0005478302 systemd: systemd-udevd.service: Deactivated successfully.
Oct  9 05:27:06 np0005478302 systemd: Stopped Rule-based Manager for Device Events and Files.
Oct  9 05:27:06 np0005478302 systemd: systemd-udevd.service: Consumed 1.274s CPU time.
Oct  9 05:27:06 np0005478302 systemd: Starting Rule-based Manager for Device Events and Files...
Oct  9 05:27:06 np0005478302 systemd-journald[30903]: Journal started
Oct  9 05:27:06 np0005478302 systemd-journald[30903]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.6M, 145.6M free.
Oct  9 05:27:06 np0005478302 systemd: Started Journal Service.
Oct  9 05:27:06 np0005478302 systemd-udevd[30915]: Using default interface naming scheme 'rhel-9.0'.
Oct  9 05:27:06 np0005478302 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  9 05:27:06 np0005478302 systemd[1]: Reloading.
Oct  9 05:27:06 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:27:06 np0005478302 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 05:27:07 np0005478302 systemd[1]: Starting PackageKit Daemon...
Oct  9 05:27:07 np0005478302 systemd[1]: Started PackageKit Daemon.
Oct  9 05:27:08 np0005478302 python3.9[33436]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:27:09 np0005478302 python3.9[36216]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  9 05:27:10 np0005478302 python3.9[37733]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  9 05:27:11 np0005478302 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 05:27:11 np0005478302 systemd[1]: Finished man-db-cache-update.service.
Oct  9 05:27:11 np0005478302 systemd[1]: man-db-cache-update.service: Consumed 6.176s CPU time.
Oct  9 05:27:11 np0005478302 systemd[1]: run-r82ba1ea67e334e48b37ab0799cef4c80.service: Deactivated successfully.
Oct  9 05:27:11 np0005478302 systemd[1]: run-rc115d0b395aa41c592bab4b266ee3b28.service: Deactivated successfully.
Oct  9 05:27:12 np0005478302 python3.9[39239]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:27:12 np0005478302 python3.9[39391]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
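[annotation] The three tasks above create a 1 GiB swap file, lock down its permissions, and register it in /etc/fstab (ansible.posix.mount with state=present only edits fstab). A shell equivalent; mkswap/swapon are not shown in the log and are an assumption about what a complete setup needs:

    dd if=/dev/zero of=/swap count=1024 bs=1M    # 1 GiB, matching the logged task
    chmod 0600 /swap && chown root:root /swap
    grep -q '^/swap ' /etc/fstab || echo '/swap none swap sw 0 0' >> /etc/fstab
    # mkswap /swap && swapon /swap    # needed before the space is actually usable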
Oct  9 05:27:14 np0005478302 python3.9[39543]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:27:14 np0005478302 python3.9[39695]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:27:14 np0005478302 python3.9[39818]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002034.165458-639-177614537811413/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:27:18 np0005478302 python3.9[39970]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  9 05:27:19 np0005478302 python3.9[40123]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  9 05:27:19 np0005478302 python3.9[40281]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  9 05:27:19 np0005478302 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 05:27:20 np0005478302 python3.9[40442]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  9 05:27:20 np0005478302 python3.9[40595]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  9 05:27:21 np0005478302 python3.9[40753]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  9 05:27:22 np0005478302 python3.9[40905]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:27:23 np0005478302 python3.9[41058]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:27:24 np0005478302 python3.9[41210]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:27:24 np0005478302 python3.9[41333]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760002043.8318367-924-8838809529180/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:27:25 np0005478302 python3.9[41485]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:27:25 np0005478302 systemd[1]: Starting Load Kernel Modules...
Oct  9 05:27:25 np0005478302 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct  9 05:27:25 np0005478302 kernel: Bridge firewalling registered
Oct  9 05:27:25 np0005478302 systemd-modules-load[41489]: Inserted module 'br_netfilter'
Oct  9 05:27:25 np0005478302 systemd[1]: Finished Load Kernel Modules.
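
The 99-edpm.conf drop-in installed just above lists kernel modules for systemd-modules-load, and the service restart loads them immediately; the kernel messages confirm br_netfilter, which restores bridge filtering via arp/ip/ip6tables. A sketch, assuming br_netfilter is the relevant entry (the file's real contents are not logged):

    - name: Persist modules to load at boot (content assumed, not logged)
      ansible.builtin.copy:
        dest: /etc/modules-load.d/99-edpm.conf
        content: |
          br_netfilter
        mode: "0644"
        setype: etc_t

    - name: Load them now
      ansible.builtin.systemd:
        name: systemd-modules-load.service
        state: restarted
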
Oct  9 05:27:25 np0005478302 python3.9[41644]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:27:26 np0005478302 python3.9[41767]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760002045.5485044-993-204082786559597/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:27:26 np0005478302 python3.9[41919]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:27:30 np0005478302 dbus-broker-launch[722]: Noticed file-system modification, trigger reload.
Oct  9 05:27:30 np0005478302 dbus-broker-launch[722]: Noticed file-system modification, trigger reload.
Oct  9 05:27:30 np0005478302 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 05:27:30 np0005478302 systemd[1]: Starting man-db-cache-update.service...
Oct  9 05:27:30 np0005478302 systemd[1]: Reloading.
Oct  9 05:27:31 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:27:31 np0005478302 systemd[1]: Starting dnf makecache...
Oct  9 05:27:31 np0005478302 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 05:27:31 np0005478302 dnf[41991]: Failed determining last makecache time.
Oct  9 05:27:31 np0005478302 dnf[41991]: delorean-openstack-barbican-42b4c41831408a8e323  19 kB/s | 3.0 kB     00:00
Oct  9 05:27:31 np0005478302 dnf[41991]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7  20 kB/s | 3.0 kB     00:00
Oct  9 05:27:31 np0005478302 dnf[41991]: delorean-openstack-cinder-1c00d6490d88e436f26ef  21 kB/s | 3.0 kB     00:00
Oct  9 05:27:31 np0005478302 dnf[41991]: delorean-python-stevedore-c4acc5639fd2329372142  20 kB/s | 3.0 kB     00:00
Oct  9 05:27:32 np0005478302 dnf[41991]: delorean-python-cloudkitty-tests-tempest-3961dc  19 kB/s | 3.0 kB     00:00
Oct  9 05:27:32 np0005478302 dnf[41991]: delorean-diskimage-builder-43381184423c185801b5  19 kB/s | 3.0 kB     00:00
Oct  9 05:27:32 np0005478302 dnf[41991]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6  21 kB/s | 3.0 kB     00:00
Oct  9 05:27:32 np0005478302 dnf[41991]: delorean-python-designate-tests-tempest-347fdbc  20 kB/s | 3.0 kB     00:00
Oct  9 05:27:32 np0005478302 dnf[41991]: delorean-openstack-glance-1fd12c29b339f30fe823e  20 kB/s | 3.0 kB     00:00
Oct  9 05:27:32 np0005478302 dnf[41991]: delorean-openstack-keystone-e4b40af0ae3698fbbbb  19 kB/s | 3.0 kB     00:00
Oct  9 05:27:32 np0005478302 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 05:27:32 np0005478302 systemd[1]: Finished man-db-cache-update.service.
Oct  9 05:27:32 np0005478302 systemd[1]: man-db-cache-update.service: Consumed 2.569s CPU time.
Oct  9 05:27:32 np0005478302 systemd[1]: run-r592bd4b97a69411ba37b9fb592da1eff.service: Deactivated successfully.
Oct  9 05:27:33 np0005478302 dnf[41991]: delorean-openstack-manila-3c01b7181572c95dac462  19 kB/s | 3.0 kB     00:00
Oct  9 05:27:33 np0005478302 dnf[41991]: delorean-python-vmware-nsxlib-458234972d1428ac9  20 kB/s | 3.0 kB     00:00
Oct  9 05:27:33 np0005478302 dnf[41991]: delorean-openstack-octavia-ba397f07a7331190208c  19 kB/s | 3.0 kB     00:00
Oct  9 05:27:33 np0005478302 dnf[41991]: delorean-openstack-watcher-c014f81a8647287f6dcc  19 kB/s | 3.0 kB     00:00
Oct  9 05:27:33 np0005478302 dnf[41991]: delorean-edpm-image-builder-55ba53cf215b14ed95b  20 kB/s | 3.0 kB     00:00
Oct  9 05:27:33 np0005478302 python3.9[45492]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:27:33 np0005478302 dnf[41991]: delorean-puppet-ceph-b0c245ccde541a63fde0564366  19 kB/s | 3.0 kB     00:00
Oct  9 05:27:33 np0005478302 dnf[41991]: delorean-openstack-swift-dc98a8463506ac520c469a  19 kB/s | 3.0 kB     00:00
Oct  9 05:27:34 np0005478302 dnf[41991]: delorean-python-tempestconf-8515371b7cceebd4282  19 kB/s | 3.0 kB     00:00
Oct  9 05:27:34 np0005478302 dnf[41991]: delorean-openstack-heat-ui-013accbfd179753bc3f0  20 kB/s | 3.0 kB     00:00
Oct  9 05:27:34 np0005478302 python3.9[45649]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  9 05:27:34 np0005478302 dnf[41991]: CentOS Stream 9 - BaseOS                         16 kB/s | 6.1 kB     00:00
Oct  9 05:27:35 np0005478302 python3.9[45800]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:27:35 np0005478302 dnf[41991]: CentOS Stream 9 - AppStream                      17 kB/s | 6.5 kB     00:00
Oct  9 05:27:35 np0005478302 python3.9[45953]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:27:35 np0005478302 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  9 05:27:36 np0005478302 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  9 05:27:36 np0005478302 python3.9[46326]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:27:36 np0005478302 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  9 05:27:36 np0005478302 systemd[1]: tuned.service: Deactivated successfully.
Oct  9 05:27:36 np0005478302 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  9 05:27:36 np0005478302 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  9 05:27:36 np0005478302 systemd[1]: Started Dynamic System Tuning Daemon.
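
After installing tuned and tuned-profiles-cpu-partitioning, the play reads /etc/tuned/active_profile, switches to throughput-performance with tuned-adm, and then enables and restarts the daemon (hence the stop/start pair above). Reconstructed, roughly:

    - name: Activate the throughput-performance profile
      ansible.builtin.command: /usr/sbin/tuned-adm profile throughput-performance

    - name: Enable and restart tuned
      ansible.builtin.systemd:
        name: tuned
        enabled: true
        state: restarted
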
Oct  9 05:27:37 np0005478302 dnf[41991]: CentOS Stream 9 - CRB                           3.2 kB/s | 6.0 kB     00:01
Oct  9 05:27:37 np0005478302 python3.9[46488]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  9 05:27:37 np0005478302 dnf[41991]: CentOS Stream 9 - Extras packages                16 kB/s | 8.0 kB     00:00
Oct  9 05:27:37 np0005478302 dnf[41991]: dlrn-antelope-testing                            19 kB/s | 3.0 kB     00:00
Oct  9 05:27:37 np0005478302 dnf[41991]: dlrn-antelope-build-deps                         20 kB/s | 3.0 kB     00:00
Oct  9 05:27:39 np0005478302 dnf[41991]: centos9-rabbitmq                                2.1 kB/s | 3.0 kB     00:01
Oct  9 05:27:39 np0005478302 dnf[41991]: centos9-storage                                 6.6 kB/s | 3.0 kB     00:00
Oct  9 05:27:39 np0005478302 python3.9[46646]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:27:40 np0005478302 systemd[1]: Reloading.
Oct  9 05:27:40 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:27:40 np0005478302 python3.9[46837]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:27:40 np0005478302 systemd[1]: Reloading.
Oct  9 05:27:40 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:27:41 np0005478302 dnf[41991]: centos9-opstools                                2.5 kB/s | 3.0 kB     00:01
Oct  9 05:27:41 np0005478302 python3.9[47027]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:27:41 np0005478302 python3.9[47180]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:27:41 np0005478302 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
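
mkswap formats the /swap file prepared at 05:27:12 and swapon activates it; the kernel line confirms about 1 GiB (1048572 KiB) at the default priority -2. The logged invocations carry no creates/removes guard, so as written they would rerun on every pass:

    - name: Format the swap file
      ansible.builtin.command: mkswap /swap

    - name: Activate the swap file
      ansible.builtin.command: swapon /swap
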
Oct  9 05:27:42 np0005478302 dnf[41991]: NFV SIG OpenvSwitch                             2.1 kB/s | 3.0 kB     00:01
Oct  9 05:27:42 np0005478302 python3.9[47334]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
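
update-ca-trust regenerates the consolidated trust stores, activating the tls-ca-bundle.pem anchor copied into /etc/pki/ca-trust/source/anchors at 05:27:14. As a task:

    - name: Rebuild the system CA trust store after adding the anchor
      ansible.builtin.command: /usr/bin/update-ca-trust
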
Oct  9 05:27:42 np0005478302 dnf[41991]: repo-setup-centos-appstream                      10 kB/s | 4.4 kB     00:00
Oct  9 05:27:43 np0005478302 dnf[41991]: repo-setup-centos-baseos                        9.2 kB/s | 3.9 kB     00:00
Oct  9 05:27:44 np0005478302 python3.9[47502]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
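
Together with stopping and disabling ksm.service and ksmtuned.service above, writing 2 to /sys/kernel/mm/ksm/run stops kernel samepage merging and unmerges any already-shared pages. Note the invocation is logged through the command module (_uses_shell=False), which treats ">" as a literal argument; the redirect only takes effect under a shell, so a sketch of the intended form:

    - name: Disable KSM and unmerge shared pages (redirect needs a shell)
      ansible.builtin.shell: echo 2 > /sys/kernel/mm/ksm/run
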
Oct  9 05:27:44 np0005478302 python3.9[47655]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:27:44 np0005478302 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  9 05:27:44 np0005478302 systemd[1]: Stopped Apply Kernel Variables.
Oct  9 05:27:44 np0005478302 systemd[1]: Stopping Apply Kernel Variables...
Oct  9 05:27:44 np0005478302 systemd[1]: Starting Apply Kernel Variables...
Oct  9 05:27:44 np0005478302 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  9 05:27:44 np0005478302 systemd[1]: Finished Apply Kernel Variables.
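
Restarting systemd-sysctl.service applies the drop-in written to /etc/sysctl.d/99-edpm.conf at 05:27:26; the deployed values are hidden (content=NOT_LOGGING_PARAMETER). A sketch with a purely illustrative key:

    - name: Install the EDPM sysctl drop-in (value below is hypothetical)
      ansible.builtin.copy:
        dest: /etc/sysctl.d/99-edpm.conf
        content: |
          # example only - the real settings are not logged
          net.bridge.bridge-nf-call-iptables = 1
        mode: "0644"
        setype: etc_t

    - name: Apply the settings
      ansible.builtin.systemd:
        name: systemd-sysctl.service
        state: restarted
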
Oct  9 05:27:44 np0005478302 dnf[41991]: repo-setup-centos-highavailability              3.1 kB/s | 3.9 kB     00:01
Oct  9 05:27:45 np0005478302 systemd[1]: session-9.scope: Deactivated successfully.
Oct  9 05:27:45 np0005478302 systemd[1]: session-9.scope: Consumed 1min 38.685s CPU time.
Oct  9 05:27:45 np0005478302 systemd-logind[745]: Session 9 logged out. Waiting for processes to exit.
Oct  9 05:27:45 np0005478302 systemd-logind[745]: Removed session 9.
Oct  9 05:27:46 np0005478302 dnf[41991]: repo-setup-centos-powertools                    3.0 kB/s | 4.3 kB     00:01
Oct  9 05:27:46 np0005478302 dnf[41991]: Extra Packages for Enterprise Linux 9 - x86_64   74 kB/s |  30 kB     00:00
Oct  9 05:27:46 np0005478302 dnf[41991]: Metadata cache created.
Oct  9 05:27:47 np0005478302 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  9 05:27:47 np0005478302 systemd[1]: Finished dnf makecache.
Oct  9 05:27:47 np0005478302 systemd[1]: dnf-makecache.service: Consumed 1.268s CPU time.
Oct  9 05:27:50 np0005478302 systemd-logind[745]: New session 10 of user zuul.
Oct  9 05:27:50 np0005478302 systemd[1]: Started Session 10 of User zuul.
Oct  9 05:27:51 np0005478302 python3.9[47841]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:27:52 np0005478302 python3.9[47997]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  9 05:27:52 np0005478302 python3.9[48150]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  9 05:27:53 np0005478302 python3.9[48308]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  9 05:27:54 np0005478302 python3.9[48468]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:27:54 np0005478302 python3.9[48552]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  9 05:28:04 np0005478302 python3.9[48717]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
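
The openvswitch install is split into two dnf tasks: a download_only=True pass that pre-fetches the packages, then a state=present pass that installs from cache. The SELinux SID-table conversion below is most likely the policy module shipped with the package being loaded. Reconstructed, roughly:

    - name: Pre-download openvswitch packages
      ansible.builtin.dnf:
        name: openvswitch
        download_only: true

    - name: Install openvswitch
      ansible.builtin.dnf:
        name: openvswitch
        state: present
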
Oct  9 05:28:12 np0005478302 kernel: SELinux:  Converting 2726 SID table entries...
Oct  9 05:28:13 np0005478302 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:28:13 np0005478302 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:28:13 np0005478302 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:28:13 np0005478302 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:28:13 np0005478302 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:28:13 np0005478302 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:28:13 np0005478302 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:28:13 np0005478302 dbus-broker-launch[733]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct  9 05:28:13 np0005478302 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct  9 05:28:13 np0005478302 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 05:28:13 np0005478302 systemd[1]: Starting man-db-cache-update.service...
Oct  9 05:28:14 np0005478302 systemd[1]: Reloading.
Oct  9 05:28:14 np0005478302 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:28:14 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:28:14 np0005478302 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 05:28:14 np0005478302 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 05:28:14 np0005478302 systemd[1]: Finished man-db-cache-update.service.
Oct  9 05:28:14 np0005478302 systemd[1]: run-rd14117d4fc9841b889e3dbcd67ff4902.service: Deactivated successfully.
Oct  9 05:28:15 np0005478302 python3.9[49819]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 05:28:15 np0005478302 systemd[1]: Reloading.
Oct  9 05:28:15 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:28:15 np0005478302 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:28:15 np0005478302 systemd[1]: Starting Open vSwitch Database Unit...
Oct  9 05:28:15 np0005478302 chown[49860]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct  9 05:28:15 np0005478302 ovs-ctl[49865]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct  9 05:28:15 np0005478302 ovs-ctl[49865]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct  9 05:28:15 np0005478302 ovs-ctl[49865]: Starting ovsdb-server [  OK  ]
Oct  9 05:28:15 np0005478302 ovs-vsctl[49914]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct  9 05:28:15 np0005478302 ovs-vsctl[49934]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"ef217152-08e8-40c8-a663-3565c5b77d4a\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct  9 05:28:15 np0005478302 ovs-ctl[49865]: Configuring Open vSwitch system IDs [  OK  ]
Oct  9 05:28:15 np0005478302 ovs-ctl[49865]: Enabling remote OVSDB managers [  OK  ]
Oct  9 05:28:15 np0005478302 systemd[1]: Started Open vSwitch Database Unit.
Oct  9 05:28:15 np0005478302 ovs-vsctl[49941]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct  9 05:28:15 np0005478302 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct  9 05:28:15 np0005478302 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct  9 05:28:15 np0005478302 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct  9 05:28:15 np0005478302 kernel: openvswitch: Open vSwitch switching datapath
Oct  9 05:28:15 np0005478302 ovs-ctl[49984]: Inserting openvswitch module [  OK  ]
Oct  9 05:28:15 np0005478302 ovs-ctl[49953]: Starting ovs-vswitchd [  OK  ]
Oct  9 05:28:15 np0005478302 ovs-ctl[49953]: Enabling remote OVSDB managers [  OK  ]
Oct  9 05:28:15 np0005478302 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct  9 05:28:15 np0005478302 systemd[1]: Starting Open vSwitch...
Oct  9 05:28:15 np0005478302 systemd[1]: Finished Open vSwitch.
Oct  9 05:28:15 np0005478302 ovs-vsctl[50003]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
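
This is the first start of openvswitch.service: ovs-ctl creates an empty /etc/openvswitch/conf.db, starts ovsdb-server, loads the openvswitch kernel module, starts ovs-vswitchd, and stamps the database with system IDs and the hostname (the ovs-vsctl calls are logged verbatim above). The chown warning about /run/openvswitch is typical first-start noise, before the runtime directory exists. The enabling task, reconstructed:

    - name: Enable and start Open vSwitch
      ansible.builtin.systemd:
        name: openvswitch.service
        enabled: true
        masked: false
        state: started
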
Oct  9 05:28:16 np0005478302 python3.9[50153]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:28:17 np0005478302 python3.9[50305]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
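
sefcontext records a persistent SELinux file-context rule so everything under /var/lib/edpm-config is labeled container_file_t; the policy reload below and the directory creation at 05:28:22 follow from it. Reconstructed, with the CLI equivalent as a comment:

    - name: Label /var/lib/edpm-config for container access
      community.general.sefcontext:
        target: '/var/lib/edpm-config(/.*)?'
        setype: container_file_t
        selevel: s0
        state: present
        reload: true
      # equivalent: semanage fcontext -a -t container_file_t '/var/lib/edpm-config(/.*)?'
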
Oct  9 05:28:18 np0005478302 kernel: SELinux:  Converting 2740 SID table entries...
Oct  9 05:28:18 np0005478302 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:28:18 np0005478302 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:28:18 np0005478302 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:28:18 np0005478302 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:28:18 np0005478302 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:28:18 np0005478302 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:28:18 np0005478302 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:28:18 np0005478302 python3.9[50460]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:28:19 np0005478302 dbus-broker-launch[733]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct  9 05:28:19 np0005478302 python3.9[50618]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:28:20 np0005478302 python3.9[50771]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:28:22 np0005478302 python3.9[51058]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  9 05:28:22 np0005478302 python3.9[51208]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:28:23 np0005478302 python3.9[51362]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:28:25 np0005478302 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 05:28:25 np0005478302 systemd[1]: Starting man-db-cache-update.service...
Oct  9 05:28:25 np0005478302 systemd[1]: Reloading.
Oct  9 05:28:25 np0005478302 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:28:25 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:28:25 np0005478302 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 05:28:25 np0005478302 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 05:28:25 np0005478302 systemd[1]: Finished man-db-cache-update.service.
Oct  9 05:28:25 np0005478302 systemd[1]: run-r97e01d9b464e46599165ea4a57efc6c3.service: Deactivated successfully.
Oct  9 05:28:26 np0005478302 python3.9[51679]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:28:26 np0005478302 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  9 05:28:26 np0005478302 systemd[1]: Stopped Network Manager Wait Online.
Oct  9 05:28:26 np0005478302 systemd[1]: Stopping Network Manager Wait Online...
Oct  9 05:28:26 np0005478302 NetworkManager[3909]: <info>  [1760002106.9904] caught SIGTERM, shutting down normally.
Oct  9 05:28:26 np0005478302 systemd[1]: Stopping Network Manager...
Oct  9 05:28:26 np0005478302 NetworkManager[3909]: <info>  [1760002106.9913] dhcp4 (eth0): canceled DHCP transaction
Oct  9 05:28:26 np0005478302 NetworkManager[3909]: <info>  [1760002106.9913] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:28:26 np0005478302 NetworkManager[3909]: <info>  [1760002106.9913] dhcp4 (eth0): state changed no lease
Oct  9 05:28:26 np0005478302 NetworkManager[3909]: <info>  [1760002106.9914] dhcp6 (eth0): canceled DHCP transaction
Oct  9 05:28:26 np0005478302 NetworkManager[3909]: <info>  [1760002106.9914] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:28:26 np0005478302 NetworkManager[3909]: <info>  [1760002106.9914] dhcp6 (eth0): state changed no lease
Oct  9 05:28:26 np0005478302 NetworkManager[3909]: <info>  [1760002106.9916] manager: NetworkManager state is now CONNECTED_SITE
Oct  9 05:28:26 np0005478302 NetworkManager[3909]: <info>  [1760002106.9949] exiting (success)
Oct  9 05:28:27 np0005478302 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 05:28:27 np0005478302 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 05:28:27 np0005478302 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  9 05:28:27 np0005478302 systemd[1]: Stopped Network Manager.
Oct  9 05:28:27 np0005478302 systemd[1]: Starting Network Manager...
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.0552] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:7c020c8f-ae8f-497c-a51c-02a263af6717)
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.0553] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.0596] manager[0x55818f620090]: monitoring kernel firmware directory '/lib/firmware'.
Oct  9 05:28:27 np0005478302 systemd[1]: Starting Hostname Service...
Oct  9 05:28:27 np0005478302 systemd[1]: Started Hostname Service.
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1069] hostname: hostname: using hostnamed
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1070] hostname: static hostname changed from (none) to "compute-0"
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1073] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1076] manager[0x55818f620090]: rfkill: Wi-Fi hardware radio set enabled
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1076] manager[0x55818f620090]: rfkill: WWAN hardware radio set enabled
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1093] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1099] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1100] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1100] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1103] manager: Networking is enabled by state file
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1104] settings: Loaded settings plugin: keyfile (internal)
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1114] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1147] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1155] dhcp: init: Using DHCP client 'internal'
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1157] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1166] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1173] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1183] device (lo): Activation: starting connection 'lo' (42cdbca8-f689-4fdd-9617-072161b4803e)
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1189] device (eth0): carrier: link connected
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1194] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1199] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1199] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1213] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1219] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1225] device (eth1): carrier: link connected
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1231] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1237] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (99381071-70a1-5f50-b83c-41d249156268) (indicated)
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1237] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1243] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1250] device (eth1): Activation: starting connection 'ci-private-network' (99381071-70a1-5f50-b83c-41d249156268)
Oct  9 05:28:27 np0005478302 systemd[1]: Started Network Manager.
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1259] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1265] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1267] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1277] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1279] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1281] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1283] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1285] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1288] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1293] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1296] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1298] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1303] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1306] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1312] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1317] dhcp4 (eth0): state changed new lease, address=192.168.26.64
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1323] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1347] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1348] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1350] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1354] device (lo): Activation: successful, device activated.
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1359] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1361] manager: NetworkManager state is now CONNECTED_LOCAL
Oct  9 05:28:27 np0005478302 NetworkManager[51695]: <info>  [1760002107.1363] device (eth1): Activation: successful, device activated.
Oct  9 05:28:27 np0005478302 systemd[1]: Starting Network Manager Wait Online...
Oct  9 05:28:27 np0005478302 python3.9[51888]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:28:28 np0005478302 NetworkManager[51695]: <info>  [1760002108.2367] dhcp6 (eth0): state changed new lease, address=2001:db8::186
Oct  9 05:28:28 np0005478302 NetworkManager[51695]: <info>  [1760002108.2378] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  9 05:28:28 np0005478302 NetworkManager[51695]: <info>  [1760002108.2400] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  9 05:28:28 np0005478302 NetworkManager[51695]: <info>  [1760002108.2401] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  9 05:28:28 np0005478302 NetworkManager[51695]: <info>  [1760002108.2404] manager: NetworkManager state is now CONNECTED_SITE
Oct  9 05:28:28 np0005478302 NetworkManager[51695]: <info>  [1760002108.2407] device (eth0): Activation: successful, device activated.
Oct  9 05:28:28 np0005478302 NetworkManager[51695]: <info>  [1760002108.2410] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  9 05:28:28 np0005478302 NetworkManager[51695]: <info>  [1760002108.2419] manager: startup complete
Oct  9 05:28:28 np0005478302 systemd[1]: Finished Network Manager Wait Online.
Oct  9 05:28:33 np0005478302 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 05:28:33 np0005478302 systemd[1]: Starting man-db-cache-update.service...
Oct  9 05:28:33 np0005478302 systemd[1]: Reloading.
Oct  9 05:28:33 np0005478302 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:28:33 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:28:33 np0005478302 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 05:28:33 np0005478302 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 05:28:33 np0005478302 systemd[1]: Finished man-db-cache-update.service.
Oct  9 05:28:33 np0005478302 systemd[1]: run-r81d7c061905a443995c22f35b7c9d4e7.service: Deactivated successfully.
Oct  9 05:28:36 np0005478302 python3.9[52371]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:28:37 np0005478302 python3.9[52523]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:37 np0005478302 python3.9[52677]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:38 np0005478302 python3.9[52829]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:38 np0005478302 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 05:28:38 np0005478302 python3.9[52983]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:39 np0005478302 python3.9[53135]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
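
The five ini_file tasks above prepare NetworkManager for os-net-config: no-auto-default=* stops NM from generating ad-hoc DHCP profiles for new interfaces, and the dns=none and rc-manager=unmanaged overrides are dropped from both NetworkManager.conf and the cloud-init drop-in so NM's default resolv.conf handling applies after the next restart. The first task, reconstructed:

    - name: Stop NetworkManager auto-creating default connections
      community.general.ini_file:
        path: /etc/NetworkManager/NetworkManager.conf
        section: main
        option: no-auto-default
        value: '*'
        no_extra_spaces: true
        backup: true
        mode: "0644"
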
Oct  9 05:28:39 np0005478302 python3.9[53287]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:28:40 np0005478302 python3.9[53410]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002119.2367327-647-257429756980105/.source _original_basename=.rmhpktfx follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:40 np0005478302 python3.9[53562]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:41 np0005478302 python3.9[53714]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct  9 05:28:41 np0005478302 python3.9[53866]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:43 np0005478302 python3.9[54293]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct  9 05:28:44 np0005478302 ansible-async_wrapper.py[54468]: Invoked with j719406167700 300 /home/zuul/.ansible/tmp/ansible-tmp-1760002123.564968-845-163759947495998/AnsiballZ_edpm_os_net_config.py _
Oct  9 05:28:44 np0005478302 ansible-async_wrapper.py[54471]: Starting module and watcher
Oct  9 05:28:44 np0005478302 ansible-async_wrapper.py[54471]: Start watching 54472 (300)
Oct  9 05:28:44 np0005478302 ansible-async_wrapper.py[54472]: Start module (54472)
Oct  9 05:28:44 np0005478302 ansible-async_wrapper.py[54468]: Return async_wrapper task started.
Oct  9 05:28:44 np0005478302 python3.9[54473]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct  9 05:28:44 np0005478302 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct  9 05:28:44 np0005478302 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct  9 05:28:44 np0005478302 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct  9 05:28:44 np0005478302 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct  9 05:28:44 np0005478302 kernel: cfg80211: failed to load regulatory.db
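
edpm_os_net_config now applies /etc/os-net-config/config.yaml (slurped at 05:28:43) with cleanup=True and use_nmstate=True; the nmstate provider opens the NetworkManager checkpoint below so it can roll back on failure. The file's contents are not logged, but from the devices created next (br-ex carrying eth1 and vlan20-23) it plausibly resembles this hypothetical sketch:

    # hypothetical reconstruction of /etc/os-net-config/config.yaml
    network_config:
      - type: ovs_bridge
        name: br-ex
        use_dhcp: false
        members:
          - type: interface
            name: eth1
            primary: true
          - type: vlan
            vlan_id: 20
            # addresses/routes omitted - not recoverable from the log
          - type: vlan
            vlan_id: 21
          - type: vlan
            vlan_id: 22
          - type: vlan
            vlan_id: 23
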
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.6472] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54474 uid=0 result="success"
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.6487] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54474 uid=0 result="success"
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.6854] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.6858] audit: op="connection-add" uuid="944812b3-3b90-47e3-8b93-838bc65c423a" name="br-ex-br" pid=54474 uid=0 result="success"
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.6869] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.6871] audit: op="connection-add" uuid="a60672d3-3db4-47e5-9ab7-f15def14768c" name="br-ex-port" pid=54474 uid=0 result="success"
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.6882] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.6884] audit: op="connection-add" uuid="2db852a8-ab77-4c6e-a5d1-216b537c5a68" name="eth1-port" pid=54474 uid=0 result="success"
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.6895] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.6897] audit: op="connection-add" uuid="04c091c8-5e99-4901-b4a0-c12c907af13d" name="vlan20-port" pid=54474 uid=0 result="success"
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.6908] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.6910] audit: op="connection-add" uuid="f882b807-3011-4187-9841-e387c4d2de4d" name="vlan21-port" pid=54474 uid=0 result="success"
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.6921] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.6923] audit: op="connection-add" uuid="9b349756-d27f-4c19-93fe-704e56edeac5" name="vlan22-port" pid=54474 uid=0 result="success"
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.6933] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.6936] audit: op="connection-add" uuid="2ea4eaee-669c-455a-920b-06e176356c59" name="vlan23-port" pid=54474 uid=0 result="success"
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.6952] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,ipv6.may-fail,ipv6.routes,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout" pid=54474 uid=0 result="success"
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.6966] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.6969] audit: op="connection-add" uuid="46bc0613-40c6-4f7e-baf9-ff45a946f10a" name="br-ex-if" pid=54474 uid=0 result="success"
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.6995] audit: op="connection-update" uuid="99381071-70a1-5f50-b83c-41d249156268" name="ci-private-network" args="connection.timestamp,connection.slave-type,connection.controller,connection.master,connection.port-type,ipv4.routes,ipv4.never-default,ipv4.method,ipv4.addresses,ipv4.routing-rules,ipv4.dns,ovs-interface.type,ipv6.routes,ipv6.method,ipv6.addresses,ipv6.addr-gen-mode,ipv6.dns,ipv6.routing-rules,ovs-external-ids.data" pid=54474 uid=0 result="success"
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7008] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7011] audit: op="connection-add" uuid="f68223c9-22b5-4a22-91f1-248bbd45fbf6" name="vlan20-if" pid=54474 uid=0 result="success"
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7025] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7027] audit: op="connection-add" uuid="371dc3e7-0a85-453c-958d-dbfd32cbc4ba" name="vlan21-if" pid=54474 uid=0 result="success"
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7041] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7043] audit: op="connection-add" uuid="80a52acc-166f-460e-87df-b0382c1fb0a2" name="vlan22-if" pid=54474 uid=0 result="success"
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7057] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7059] audit: op="connection-add" uuid="ceaca123-ecf5-470a-80f3-07bc719dfebc" name="vlan23-if" pid=54474 uid=0 result="success"
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7069] audit: op="connection-delete" uuid="91b0cf9d-52d5-3c28-aeb9-d8a6541bcd9c" name="Wired connection 1" pid=54474 uid=0 result="success"
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7080] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7090] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7095] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (944812b3-3b90-47e3-8b93-838bc65c423a)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7097] audit: op="connection-activate" uuid="944812b3-3b90-47e3-8b93-838bc65c423a" name="br-ex-br" pid=54474 uid=0 result="success"
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7100] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7108] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7113] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (a60672d3-3db4-47e5-9ab7-f15def14768c)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7116] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7123] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7128] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (2db852a8-ab77-4c6e-a5d1-216b537c5a68)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7130] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7138] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7143] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (04c091c8-5e99-4901-b4a0-c12c907af13d)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7145] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7153] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7158] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (f882b807-3011-4187-9841-e387c4d2de4d)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7160] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7168] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7174] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (9b349756-d27f-4c19-93fe-704e56edeac5)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7176] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7184] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7189] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (2ea4eaee-669c-455a-920b-06e176356c59)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7191] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7194] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7196] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7203] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7209] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7214] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (46bc0613-40c6-4f7e-baf9-ff45a946f10a)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7216] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7225] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7228] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7230] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7232] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7242] device (eth1): disconnecting for new activation request.
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7243] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7245] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7247] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7248] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7250] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7254] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7257] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (f68223c9-22b5-4a22-91f1-248bbd45fbf6)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7258] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7261] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7262] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7263] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7265] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7269] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7272] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (371dc3e7-0a85-453c-958d-dbfd32cbc4ba)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7273] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7275] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7277] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7278] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7280] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7283] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7287] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (80a52acc-166f-460e-87df-b0382c1fb0a2)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7288] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7290] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7292] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7293] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7295] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7298] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7302] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (ceaca123-ecf5-470a-80f3-07bc719dfebc)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7303] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7305] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7307] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7308] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7309] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7319] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,ipv6.may-fail,ipv6.routes,ipv6.method,ipv6.addr-gen-mode" pid=54474 uid=0 result="success"
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7321] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7324] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7325] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7330] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7333] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7336] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7338] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7340] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 kernel: ovs-system: entered promiscuous mode
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7344] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7347] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7351] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7352] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7356] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 kernel: Timeout policy base is empty
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7360] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7363] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7365] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7369] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 systemd-udevd[54478]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7378] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7432] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7433] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7436] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7439] dhcp4 (eth0): canceled DHCP transaction
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7439] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7439] dhcp4 (eth0): state changed no lease
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7439] dhcp6 (eth0): canceled DHCP transaction
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7439] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7439] dhcp6 (eth0): state changed no lease
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7443] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7449] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7453] audit: op="device-reapply" interface="eth1" ifindex=3 pid=54474 uid=0 result="fail" reason="Device is not activated"
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7456] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct  9 05:28:45 np0005478302 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7466] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7480] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7483] dhcp4 (eth0): state changed new lease, address=192.168.26.64
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7485] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7512] device (eth1): disconnecting for new activation request.
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7513] audit: op="connection-activate" uuid="99381071-70a1-5f50-b83c-41d249156268" name="ci-private-network" pid=54474 uid=0 result="success"
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7513] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 05:28:45 np0005478302 kernel: br-ex: entered promiscuous mode
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7591] device (eth1): Activation: starting connection 'ci-private-network' (99381071-70a1-5f50-b83c-41d249156268)
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7593] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7604] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7606] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7610] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7613] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7618] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7619] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7620] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7621] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7622] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7623] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7624] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54474 uid=0 result="success"
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7625] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7630] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7633] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7635] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7638] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7639] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7643] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7645] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7648] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7650] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7653] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7655] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7658] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7662] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7664] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7696] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7697] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7701] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7705] device (eth1): Activation: successful, device activated.
Oct  9 05:28:45 np0005478302 kernel: vlan22: entered promiscuous mode
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7731] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7752] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7753] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7757] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478302 kernel: vlan23: entered promiscuous mode
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7797] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct  9 05:28:45 np0005478302 systemd-udevd[54480]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7809] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7833] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7834] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7838] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478302 kernel: vlan20: entered promiscuous mode
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7868] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7881] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7898] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7899] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7903] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7938] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7944] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 kernel: vlan21: entered promiscuous mode
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7971] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7972] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.7976] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478302 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.8053] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.8068] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.8098] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.8099] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478302 NetworkManager[51695]: <info>  [1760002125.8104] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
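
[note] The burst above is NetworkManager activating an Open vSwitch topology: one bridge (br-ex) with an internal interface, an uplink port enslaving eth1 ('ci-private-network'), and tagged access ports vlan20-23, each with its own internal interface. The profiles were created over the NM API by os-net-config (pid 54474); a hand-built nmcli equivalent, as a sketch only, would look like this (names mirror the log, IP/MTU settings omitted):

    # Sketch: nmcli equivalent of the topology activated above.
    nmcli conn add type ovs-bridge conn.interface br-ex con-name br-ex-br

    # Internal port + interface for the bridge itself (where an IP would live).
    nmcli conn add type ovs-port conn.interface br-ex master br-ex-br con-name br-ex-port
    nmcli conn add type ovs-interface slave-type ovs-port conn.interface br-ex \
          master br-ex-port con-name br-ex-if

    # Uplink: an OVS port enslaving the physical NIC eth1.
    nmcli conn add type ovs-port conn.interface eth1 master br-ex-br con-name eth1-port
    nmcli conn add type ethernet conn.interface eth1 master eth1-port \
          con-name ci-private-network

    # One tagged access port + internal interface per VLAN (vlan21-23 alike).
    nmcli conn add type ovs-port conn.interface vlan20 master br-ex-br \
          ovs-port.tag 20 con-name vlan20-port
    nmcli conn add type ovs-interface slave-type ovs-port conn.interface vlan20 \
          master vlan20-port con-name vlan20-if
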
Oct  9 05:28:46 np0005478302 NetworkManager[51695]: <info>  [1760002126.9031] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54474 uid=0 result="success"
Oct  9 05:28:47 np0005478302 NetworkManager[51695]: <info>  [1760002127.0121] checkpoint[0x55818f5f7950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct  9 05:28:47 np0005478302 NetworkManager[51695]: <info>  [1760002127.0122] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54474 uid=0 result="success"
Oct  9 05:28:47 np0005478302 NetworkManager[51695]: <info>  [1760002127.1272] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=54474 uid=0 result="success"
Oct  9 05:28:47 np0005478302 NetworkManager[51695]: <info>  [1760002127.1282] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=54474 uid=0 result="success"
Oct  9 05:28:47 np0005478302 NetworkManager[51695]: <info>  [1760002127.2929] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=54474 uid=0 result="success"
Oct  9 05:28:47 np0005478302 NetworkManager[51695]: <info>  [1760002127.4105] checkpoint[0x55818f5f7a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct  9 05:28:47 np0005478302 NetworkManager[51695]: <info>  [1760002127.4108] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=54474 uid=0 result="success"
Oct  9 05:28:47 np0005478302 NetworkManager[51695]: <info>  [1760002127.6350] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=54474 uid=0 result="success"
Oct  9 05:28:47 np0005478302 NetworkManager[51695]: <info>  [1760002127.6362] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=54474 uid=0 result="success"
Oct  9 05:28:47 np0005478302 NetworkManager[51695]: <info>  [1760002127.8016] audit: op="networking-control" arg="global-dns-configuration" pid=54474 uid=0 result="success"
Oct  9 05:28:47 np0005478302 NetworkManager[51695]: <info>  [1760002127.8030] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf)
Oct  9 05:28:47 np0005478302 NetworkManager[51695]: <info>  [1760002127.8036] audit: op="networking-control" arg="global-dns-configuration" pid=54474 uid=0 result="success"
Oct  9 05:28:47 np0005478302 NetworkManager[51695]: <info>  [1760002127.8065] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=54474 uid=0 result="success"
Oct  9 05:28:47 np0005478302 python3.9[54829]: ansible-ansible.legacy.async_status Invoked with jid=j719406167700.54468 mode=status _async_dir=/root/.ansible_async
Oct  9 05:28:47 np0005478302 NetworkManager[51695]: <info>  [1760002127.9178] checkpoint[0x55818f5f7af0]: destroy /org/freedesktop/NetworkManager/Checkpoint/3
Oct  9 05:28:47 np0005478302 NetworkManager[51695]: <info>  [1760002127.9188] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=54474 uid=0 result="success"
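
[note] The checkpoint-create / checkpoint-adjust-rollback-timeout / checkpoint-destroy audit entries are NetworkManager's D-Bus checkpoint API: the caller snapshots the network state before changing it, keeps pushing the rollback deadline out while verifying connectivity, and destroys the checkpoint to commit (if the timeout ever expired, NM would roll the changes back). A rough shell equivalent using busctl against the documented org.freedesktop.NetworkManager methods:

    # Sketch: drive NM checkpoints by hand over D-Bus.
    # CheckpointCreate(devices: ao, rollback_timeout: u, flags: u) -> o
    # An empty device array (count 0) checkpoints all managed devices.
    CP=$(busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
          org.freedesktop.NetworkManager CheckpointCreate aouu 0 600 0 \
          | awk '{print $2}' | tr -d '"')

    # ...apply network changes, extending the rollback deadline as needed...
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
          org.freedesktop.NetworkManager CheckpointAdjustRollbackTimeout ou "$CP" 600

    # Commit: destroying the checkpoint makes the changes permanent.
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
          org.freedesktop.NetworkManager CheckpointDestroy o "$CP"
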
Oct  9 05:28:47 np0005478302 ansible-async_wrapper.py[54472]: Module complete (54472)
Oct  9 05:28:49 np0005478302 ansible-async_wrapper.py[54471]: Done in kid B.
Oct  9 05:28:51 np0005478302 python3.9[54933]: ansible-ansible.legacy.async_status Invoked with jid=j719406167700.54468 mode=status _async_dir=/root/.ansible_async
Oct  9 05:28:51 np0005478302 python3.9[55032]: ansible-ansible.legacy.async_status Invoked with jid=j719406167700.54468 mode=cleanup _async_dir=/root/.ansible_async
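
[note] The async_status calls are the polling half of Ansible's fire-and-forget pattern: os-net-config ran under ansible-async_wrapper (jid j719406167700.54468), the controller polled mode=status until completion, then issued a final mode=cleanup to remove the job file under /root/.ansible_async. The same pattern from the CLI, with placeholder host and job id:

    # Sketch: Ansible async/poll from the CLI (host, config path, jid are placeholders).
    # -B 1800: allow 1800 s in the background; -P 0: return immediately.
    ansible compute01 -m ansible.builtin.command \
            -a 'os-net-config -c /etc/os-net-config/config.yaml' -B 1800 -P 0

    # The launch returns an ansible_job_id; poll until finished=1...
    ansible compute01 -m ansible.builtin.async_status -a 'jid=<ansible_job_id>'

    # ...then remove the job's status file from ~/.ansible_async.
    ansible compute01 -m ansible.builtin.async_status -a 'jid=<ansible_job_id> mode=cleanup'
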
Oct  9 05:28:52 np0005478302 python3.9[55185]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:28:52 np0005478302 python3.9[55308]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002131.7118993-926-59573423124325/.source.returncode _original_basename=.khgptrbg follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:52 np0005478302 python3.9[55460]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:28:53 np0005478302 python3.9[55583]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002132.6432128-974-61255909020983/.source.cfg _original_basename=.12opslyh follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
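
[note] Writing /etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg hands network rendering over from cloud-init to os-net-config/NetworkManager on later boots. The payload itself is not logged (content=NOT_LOGGING_PARAMETER), but cloud-init's documented switch for this is a one-line drop-in, presumably:

    # Sketch: cloud-init's conventional knob for disabling its network config
    # (actual file contents above are not logged).
    cat >/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg <<'EOF'
    network: {config: disabled}
    EOF
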
Oct  9 05:28:53 np0005478302 python3.9[55735]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:28:53 np0005478302 systemd[1]: Reloading Network Manager...
Oct  9 05:28:53 np0005478302 NetworkManager[51695]: <info>  [1760002133.9750] audit: op="reload" arg="0" pid=55739 uid=0 result="success"
Oct  9 05:28:53 np0005478302 NetworkManager[51695]: <info>  [1760002133.9755] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct  9 05:28:53 np0005478302 NetworkManager[51695]: <info>  [1760002133.9756] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  9 05:28:53 np0005478302 systemd[1]: Reloaded Network Manager.
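
[note] The reload (audit op="reload" arg="0") re-reads NetworkManager.conf and the conf.d drop-ins listed in the SIGHUP line without restarting the daemon or bouncing connections. Manually, either of these ends up as the same D-Bus Reload() call:

    # Sketch: re-read NM configuration without a service restart.
    systemctl reload NetworkManager
    nmcli general reload conf    # reload just the config subsystem
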
Oct  9 05:28:54 np0005478302 systemd[1]: session-10.scope: Deactivated successfully.
Oct  9 05:28:54 np0005478302 systemd[1]: session-10.scope: Consumed 35.495s CPU time.
Oct  9 05:28:54 np0005478302 systemd-logind[745]: Session 10 logged out. Waiting for processes to exit.
Oct  9 05:28:54 np0005478302 systemd-logind[745]: Removed session 10.
Oct  9 05:28:57 np0005478302 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 05:28:59 np0005478302 systemd-logind[745]: New session 11 of user zuul.
Oct  9 05:28:59 np0005478302 systemd[1]: Started Session 11 of User zuul.
Oct  9 05:29:00 np0005478302 python3.9[55925]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:29:00 np0005478302 python3.9[56079]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:29:01 np0005478302 python3.9[56273]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:29:01 np0005478302 systemd[1]: session-11.scope: Deactivated successfully.
Oct  9 05:29:01 np0005478302 systemd[1]: session-11.scope: Consumed 1.638s CPU time.
Oct  9 05:29:01 np0005478302 systemd-logind[745]: Session 11 logged out. Waiting for processes to exit.
Oct  9 05:29:01 np0005478302 systemd-logind[745]: Removed session 11.
Oct  9 05:29:03 np0005478302 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 05:29:06 np0005478302 systemd-logind[745]: New session 12 of user zuul.
Oct  9 05:29:06 np0005478302 systemd[1]: Started Session 12 of User zuul.
Oct  9 05:29:07 np0005478302 python3.9[56455]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:29:08 np0005478302 python3.9[56609]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:29:09 np0005478302 python3.9[56765]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:29:09 np0005478302 python3.9[56849]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:29:11 np0005478302 python3.9[57003]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:29:12 np0005478302 python3.9[57198]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:12 np0005478302 python3.9[57350]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:29:12 np0005478302 systemd[1]: var-lib-containers-storage-overlay-compat394710589-merged.mount: Deactivated successfully.
Oct  9 05:29:12 np0005478302 podman[57351]: 2025-10-09 09:29:12.839948551 +0000 UTC m=+0.025341272 system refresh
Oct  9 05:29:13 np0005478302 python3.9[57511]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:13 np0005478302 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 05:29:13 np0005478302 python3.9[57635]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002152.9618056-197-276217222017631/.source.json follow=False _original_basename=podman_network_config.j2 checksum=aff0b44a260430bd28f2398cfc76ff8d13746f14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:14 np0005478302 python3.9[57787]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:14 np0005478302 python3.9[57910]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760002154.0395505-242-51494379355276/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
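
[note] 20-edpm-podman-registries.conf is a containers-registries.conf.d(5) drop-in; its contents are hidden by the copy module, but such files are TOML and typically pin the unqualified-search list and any mirrors, along these assumed lines:

    # Sketch: a typical registries.conf.d drop-in (actual contents not logged).
    cat >/etc/containers/registries.conf.d/20-edpm-podman-registries.conf <<'EOF'
    unqualified-search-registries = ["registry.redhat.io", "quay.io"]

    [[registry]]
    prefix = "quay.io"
    location = "quay.io"
    EOF
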
Oct  9 05:29:15 np0005478302 python3.9[58062]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:29:15 np0005478302 python3.9[58214]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:29:16 np0005478302 python3.9[58366]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:29:16 np0005478302 python3.9[58518]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
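
[note] The four ini_file tasks above assemble /etc/containers/containers.conf key by key. Reconstructed from the logged section/option/value triples, the net result is:

    # Net effect of the four ini_file invocations on containers.conf:
    cat >/etc/containers/containers.conf <<'EOF'
    [containers]
    pids_limit = 4096

    [engine]
    events_logger = "journald"
    runtime = "crun"

    [network]
    network_backend = "netavark"
    EOF
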
Oct  9 05:29:17 np0005478302 python3.9[58671]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:29:18 np0005478302 python3.9[58824]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:29:19 np0005478302 python3.9[58978]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:29:19 np0005478302 python3.9[59130]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:29:20 np0005478302 python3.9[59282]: ansible-service_facts Invoked
Oct  9 05:29:20 np0005478302 network[59299]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 05:29:20 np0005478302 network[59300]: 'network-scripts' will be removed from distribution in near future.
Oct  9 05:29:20 np0005478302 network[59301]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 05:29:23 np0005478302 python3.9[59755]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:29:25 np0005478302 python3.9[59908]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  9 05:29:26 np0005478302 python3.9[60060]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:27 np0005478302 python3.9[60185]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002166.53228-638-273089195007762/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:28 np0005478302 python3.9[60339]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:28 np0005478302 python3.9[60464]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002167.9599245-683-186141541154468/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:30 np0005478302 python3.9[60618]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
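
[note] The lineinfile task enforces PEERNTP=no in /etc/sysconfig/network so the DHCP client hooks stop feeding DHCP-supplied NTP servers to chronyd. Its shell equivalent:

    # Shell equivalent of the lineinfile task: replace an existing
    # PEERNTP= line if present, otherwise append one.
    if grep -q '^PEERNTP=' /etc/sysconfig/network; then
        sed -i 's/^PEERNTP=.*/PEERNTP=no/' /etc/sysconfig/network
    else
        echo 'PEERNTP=no' >> /etc/sysconfig/network
    fi
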
Oct  9 05:29:31 np0005478302 python3.9[60772]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:29:32 np0005478302 python3.9[60856]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:29:33 np0005478302 python3.9[61010]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:29:34 np0005478302 python3.9[61094]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:29:34 np0005478302 chronyd[752]: chronyd exiting
Oct  9 05:29:34 np0005478302 systemd[1]: Stopping NTP client/server...
Oct  9 05:29:34 np0005478302 systemd[1]: chronyd.service: Deactivated successfully.
Oct  9 05:29:34 np0005478302 systemd[1]: Stopped NTP client/server.
Oct  9 05:29:34 np0005478302 systemd[1]: Starting NTP client/server...
Oct  9 05:29:34 np0005478302 chronyd[61102]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  9 05:29:34 np0005478302 chronyd[61102]: Frequency -10.397 +/- 0.260 ppm read from /var/lib/chrony/drift
Oct  9 05:29:34 np0005478302 chronyd[61102]: Loaded seccomp filter (level 2)
Oct  9 05:29:34 np0005478302 systemd[1]: Started NTP client/server.
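
[note] With /etc/chrony.conf and /etc/sysconfig/chronyd replaced, the service is enabled and then restarted so the new configuration (and the saved drift of -10.397 ppm) takes effect. To verify the daemon after such a restart:

    # Quick post-restart checks for chronyd (commands ship with chrony).
    systemctl restart chronyd
    chronyc tracking      # current offset/frequency against the selected source
    chronyc sources -v    # configured servers and their reachability
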
Oct  9 05:29:34 np0005478302 systemd[1]: session-12.scope: Deactivated successfully.
Oct  9 05:29:34 np0005478302 systemd[1]: session-12.scope: Consumed 17.415s CPU time.
Oct  9 05:29:34 np0005478302 systemd-logind[745]: Session 12 logged out. Waiting for processes to exit.
Oct  9 05:29:34 np0005478302 systemd-logind[745]: Removed session 12.
Oct  9 05:29:39 np0005478302 systemd-logind[745]: New session 13 of user zuul.
Oct  9 05:29:39 np0005478302 systemd[1]: Started Session 13 of User zuul.
Oct  9 05:29:40 np0005478302 python3.9[61283]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:40 np0005478302 python3.9[61435]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:41 np0005478302 python3.9[61558]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002180.3373914-62-173846596383321/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:41 np0005478302 systemd[1]: session-13.scope: Deactivated successfully.
Oct  9 05:29:41 np0005478302 systemd[1]: session-13.scope: Consumed 1.136s CPU time.
Oct  9 05:29:41 np0005478302 systemd-logind[745]: Session 13 logged out. Waiting for processes to exit.
Oct  9 05:29:41 np0005478302 systemd-logind[745]: Removed session 13.
Oct  9 05:29:47 np0005478302 systemd-logind[745]: New session 14 of user zuul.
Oct  9 05:29:47 np0005478302 systemd[1]: Started Session 14 of User zuul.
Oct  9 05:29:48 np0005478302 python3.9[61736]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:29:48 np0005478302 python3.9[61892]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:49 np0005478302 python3.9[62067]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:50 np0005478302 python3.9[62190]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1760002189.0920806-83-24923755320808/.source.json _original_basename=.r6l8sm47 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:51 np0005478302 python3.9[62342]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:51 np0005478302 python3.9[62465]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002190.902763-152-40248120562679/.source _original_basename=.om7tcl5n follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:52 np0005478302 python3.9[62617]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:29:52 np0005478302 python3.9[62769]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:53 np0005478302 python3.9[62892]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760002192.3686986-224-32856597022650/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:29:53 np0005478302 python3.9[63044]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:53 np0005478302 python3.9[63167]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760002193.1840155-224-264822907968799/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:29:54 np0005478302 python3.9[63319]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:55 np0005478302 python3.9[63471]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:55 np0005478302 python3.9[63594]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002194.7978673-335-20565616421453/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:56 np0005478302 python3.9[63746]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:56 np0005478302 python3.9[63869]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002195.692038-380-80652160347206/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:57 np0005478302 python3.9[64021]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:29:57 np0005478302 systemd[1]: Reloading.
Oct  9 05:29:57 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:29:57 np0005478302 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:29:57 np0005478302 systemd[1]: Reloading.
Oct  9 05:29:57 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:29:57 np0005478302 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:29:57 np0005478302 systemd[1]: Starting EDPM Container Shutdown...
Oct  9 05:29:57 np0005478302 systemd[1]: Finished EDPM Container Shutdown.
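
The four preceding steps form one unit-install pattern: stage the script under /var/local/libexec, install the .service file, drop a 91-*.preset under /etc/systemd/system-preset, then let the systemd module reload the daemon, enable, and start the unit. A minimal Python sketch of that last task's effect (an illustration of the logged parameters daemon_reload=True, enabled=True, state=started, not the module's actual implementation):

    import subprocess

    def enable_and_start(unit: str) -> None:
        # daemon_reload=True: re-read unit files so the freshly copied
        # /etc/systemd/system/<unit>.service becomes visible.
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        # enabled=True: persistent enablement; the 91-*.preset file keeps
        # later "systemctl preset" runs from disabling the unit again.
        subprocess.run(["systemctl", "enable", unit], check=True)
        # state=started: the oneshot runs, hence "Finished EDPM Container
        # Shutdown." above.
        subprocess.run(["systemctl", "start", unit], check=True)

    enable_and_start("edpm-container-shutdown")
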
Oct  9 05:29:58 np0005478302 python3.9[64248]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:58 np0005478302 python3.9[64371]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002197.83771-449-29831433283766/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:59 np0005478302 python3.9[64523]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:59 np0005478302 python3.9[64646]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002198.7838008-494-216692787790523/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:00 np0005478302 python3.9[64798]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:30:00 np0005478302 systemd[1]: Reloading.
Oct  9 05:30:00 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:30:00 np0005478302 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:30:00 np0005478302 systemd[1]: Reloading.
Oct  9 05:30:00 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:30:00 np0005478302 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:30:00 np0005478302 systemd[1]: Starting Create netns directory...
Oct  9 05:30:00 np0005478302 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 05:30:00 np0005478302 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 05:30:00 np0005478302 systemd[1]: Finished Create netns directory.
Oct  9 05:30:01 np0005478302 python3.9[65022]: ansible-ansible.builtin.service_facts Invoked
Oct  9 05:30:01 np0005478302 network[65039]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 05:30:01 np0005478302 network[65040]: 'network-scripts' will be removed from distribution in near future.
Oct  9 05:30:01 np0005478302 network[65041]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 05:30:03 np0005478302 python3.9[65305]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:30:03 np0005478302 systemd[1]: Reloading.
Oct  9 05:30:03 np0005478302 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:30:03 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:30:04 np0005478302 systemd[1]: Stopping IPv4 firewall with iptables...
Oct  9 05:30:04 np0005478302 iptables.init[65344]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct  9 05:30:04 np0005478302 iptables.init[65344]: iptables: Flushing firewall rules: [  OK  ]
Oct  9 05:30:04 np0005478302 systemd[1]: iptables.service: Deactivated successfully.
Oct  9 05:30:04 np0005478302 systemd[1]: Stopped IPv4 firewall with iptables.
Oct  9 05:30:04 np0005478302 python3.9[65540]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:30:05 np0005478302 python3.9[65694]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:30:05 np0005478302 systemd[1]: Reloading.
Oct  9 05:30:05 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:30:05 np0005478302 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:30:05 np0005478302 systemd[1]: Starting Netfilter Tables...
Oct  9 05:30:05 np0005478302 systemd[1]: Finished Netfilter Tables.
Oct  9 05:30:06 np0005478302 python3.9[65886]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
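
With iptables.service and ip6tables.service stopped and disabled and nftables.service enabled, the play clears any leftover rules with `nft flush ruleset` (it inspects the ruleset with `nft -j list ruleset` later, at 05:31:09). A small sketch combining the flush with a JSON-based emptiness check; the verification step is illustrative, not part of this exact task:

    import json
    import subprocess

    # Drop every table, chain and rule, whichever backend created them.
    subprocess.run(["nft", "flush", "ruleset"], check=True)

    # nft's JSON output is {"nftables": [...]}; after a full flush only
    # the "metainfo" element should remain.
    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         check=True, capture_output=True, text=True).stdout
    leftovers = [e for e in json.loads(out)["nftables"] if "table" in e]
    assert not leftovers, f"unexpected tables after flush: {leftovers}"
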
Oct  9 05:30:07 np0005478302 python3.9[66039]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:07 np0005478302 python3.9[66164]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002207.0957942-701-32300244790318/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:08 np0005478302 python3.9[66315]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
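
The sshd_config copy above uses the copy module's validate parameter (/usr/sbin/sshd -T -f %s): the candidate file is test-parsed before it may replace the live config, and sshd is only reloaded afterwards. A sketch of that guard pattern, assuming a plain file move stands in for the module's atomic install:

    import shutil
    import subprocess
    import tempfile

    def install_sshd_config(candidate: str,
                            dest: str = "/etc/ssh/sshd_config") -> None:
        with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
            tmp.write(candidate)
            staged = tmp.name
        # sshd -T -f parses the staged file and exits non-zero on any
        # syntax error, so a bad template never reaches the live path.
        subprocess.run(["/usr/sbin/sshd", "-T", "-f", staged],
                       check=True, capture_output=True)
        shutil.move(staged, dest)  # the real task also enforces mode=0600
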
Oct  9 05:30:33 np0005478302 systemd[1]: session-14.scope: Deactivated successfully.
Oct  9 05:30:33 np0005478302 systemd[1]: session-14.scope: Consumed 13.347s CPU time.
Oct  9 05:30:33 np0005478302 systemd-logind[745]: Session 14 logged out. Waiting for processes to exit.
Oct  9 05:30:33 np0005478302 systemd-logind[745]: Removed session 14.
Oct  9 05:30:45 np0005478302 systemd-logind[745]: New session 15 of user zuul.
Oct  9 05:30:45 np0005478302 systemd[1]: Started Session 15 of User zuul.
Oct  9 05:30:46 np0005478302 python3.9[66508]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:30:47 np0005478302 python3.9[66664]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:47 np0005478302 python3.9[66839]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:48 np0005478302 python3.9[66917]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.73t1uolb recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:49 np0005478302 python3.9[67069]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:49 np0005478302 python3.9[67147]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.ey21arui recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:50 np0005478302 python3.9[67299]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:30:50 np0005478302 python3.9[67451]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:50 np0005478302 python3.9[67529]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:30:51 np0005478302 python3.9[67681]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:51 np0005478302 python3.9[67759]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:30:52 np0005478302 python3.9[67911]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:52 np0005478302 python3.9[68063]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:53 np0005478302 python3.9[68141]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:53 np0005478302 python3.9[68293]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:54 np0005478302 python3.9[68371]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:54 np0005478302 python3.9[68523]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:30:55 np0005478302 systemd[1]: Reloading.
Oct  9 05:30:55 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:30:55 np0005478302 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:30:55 np0005478302 python3.9[68713]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:56 np0005478302 python3.9[68791]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:56 np0005478302 python3.9[68943]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:57 np0005478302 python3.9[69021]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:57 np0005478302 python3.9[69173]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:30:57 np0005478302 systemd[1]: Reloading.
Oct  9 05:30:57 np0005478302 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:30:57 np0005478302 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:30:57 np0005478302 systemd[1]: Starting Create netns directory...
Oct  9 05:30:57 np0005478302 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 05:30:57 np0005478302 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 05:30:57 np0005478302 systemd[1]: Finished Create netns directory.
Oct  9 05:30:58 np0005478302 python3.9[69364]: ansible-ansible.builtin.service_facts Invoked
Oct  9 05:30:58 np0005478302 network[69381]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 05:30:58 np0005478302 network[69382]: 'network-scripts' will be removed from distribution in near future.
Oct  9 05:30:58 np0005478302 network[69383]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 05:31:01 np0005478302 python3.9[69646]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:01 np0005478302 python3.9[69724]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:02 np0005478302 python3.9[69876]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:02 np0005478302 python3.9[70028]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:03 np0005478302 python3.9[70151]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002262.470066-608-279597778123685/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:04 np0005478302 python3.9[70303]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  9 05:31:04 np0005478302 systemd[1]: Starting Time & Date Service...
Oct  9 05:31:04 np0005478302 systemd[1]: Started Time & Date Service.
Oct  9 05:31:05 np0005478302 python3.9[70459]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:05 np0005478302 python3.9[70611]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:05 np0005478302 python3.9[70734]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002265.2212472-713-224186473516121/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:06 np0005478302 python3.9[70886]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:06 np0005478302 python3.9[71009]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002266.1219974-758-39527974996604/.source.yaml _original_basename=.ytw3vt68 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:07 np0005478302 python3.9[71161]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:07 np0005478302 python3.9[71284]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002267.095801-803-173030882349578/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:08 np0005478302 python3.9[71436]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:31:09 np0005478302 python3.9[71589]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:31:09 np0005478302 python3[71742]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  9 05:31:10 np0005478302 python3.9[71894]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:10 np0005478302 python3.9[72017]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002269.9386172-920-186769010790131/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:11 np0005478302 python3.9[72169]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:11 np0005478302 python3.9[72292]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002270.869788-965-34214081859271/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:12 np0005478302 python3.9[72444]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:12 np0005478302 python3.9[72567]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002271.7625308-1010-207662511690553/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:12 np0005478302 python3.9[72719]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:13 np0005478302 python3.9[72842]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002272.641903-1055-73315930830393/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:13 np0005478302 python3.9[72994]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:14 np0005478302 python3.9[73117]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002273.5186462-1100-154883510369796/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:14 np0005478302 python3.9[73269]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:15 np0005478302 python3.9[73421]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:31:16 np0005478302 python3.9[73580]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
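
This block closes the nftables generation pass: the edpm_nftables_from_files module renders snippets from the YAML under /var/lib/edpm-config/firewall, the pipeline at 05:31:15 dry-runs them, and blockinfile persists only the include lines for iptables.nft, edpm-chains.nft, edpm-rules.nft and edpm-jumps.nft in /etc/sysconfig/nftables.conf (the flush and update-jumps snippets exist for live reloads only). A sketch of the dry-run step, mirroring the logged concatenation order:

    import subprocess

    SNIPPETS = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    combined = b"".join(open(path, "rb").read() for path in SNIPPETS)
    # nft -c -f -: parse and validate from stdin without changing the live
    # ruleset; a non-zero exit fails the play before anything is applied.
    subprocess.run(["nft", "-c", "-f", "-"], input=combined, check=True)
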
Oct  9 05:31:16 np0005478302 python3.9[73733]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:17 np0005478302 python3.9[73885]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:17 np0005478302 python3.9[74037]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  9 05:31:18 np0005478302 python3.9[74190]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
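
The two mount tasks give each hugepage size its own hugetlbfs instance. A sketch of the equivalent manual steps; the fstab lines are an assumption about what ansible.posix.mount persists for src=none, boot=True, dump=0, passno=0:

    import subprocess

    # Equivalent one-off mounts; boot=True additionally persists lines like
    #   none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    #   none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
    # in /etc/fstab (assumed format).
    for path, size in (("/dev/hugepages1G", "1G"), ("/dev/hugepages2M", "2M")):
        subprocess.run(["mount", "-t", "hugetlbfs",
                        "-o", f"pagesize={size}", "none", path], check=True)
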
Oct  9 05:31:18 np0005478302 systemd[1]: session-15.scope: Deactivated successfully.
Oct  9 05:31:18 np0005478302 systemd[1]: session-15.scope: Consumed 21.254s CPU time.
Oct  9 05:31:18 np0005478302 systemd-logind[745]: Session 15 logged out. Waiting for processes to exit.
Oct  9 05:31:18 np0005478302 systemd-logind[745]: Removed session 15.
Oct  9 05:31:23 np0005478302 systemd-logind[745]: New session 16 of user zuul.
Oct  9 05:31:23 np0005478302 systemd[1]: Started Session 16 of User zuul.
Oct  9 05:31:23 np0005478302 python3.9[74371]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  9 05:31:24 np0005478302 python3.9[74523]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:31:25 np0005478302 python3.9[74675]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:31:25 np0005478302 python3.9[74827]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKE7qnQSdbsdsOaGWRokEAHfuZHqF4BkfkIlbsIxi6+FzXfmziMPrsg1PoVUBFOzaP55y6aRtUEaXoCsB+KxPGXhHnh3IdEYTUa5EvJs6/mUlEqIwltt8CLNKUrDV6N38V1v5gaRPIAI5iTwtbap14q+0iDF8MVi8MPKlkqoL/+Z49sJ4HqR31EZpD4cWKso/dkKZQSuVQg+TgJ3bnUKIRYPDS7fjVuZpr0KMyU+v4wjBKXvles8lctvRXdfpY2/33XtBG2af+p/+5mg47b5ylWC3wISLO590WzC4X2T0Pv1a6I9O/Dt3V8xyTfzbqi4ia9/kwNBJg1GGqNBssdedHK3AZDOTSd9U+/C1R9oBDXZ7nSo3hIzMQvrm5DXkthix56gd3x9MrMMzc+wTlFtlm2XwpMg7PtdxMZK++rIfPVxzKXBBQsdDd0W3cbam616N/XERaDJKIUqnPe5sE1qhpaFt8aNtwg+buZpYK5ubLbuJZpASgSC6dIuDsEIk6Af8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEtxusJG2g5S2RnWLxtcDjdiTuv+VWibld9MVjIgPUzn#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG1pQwHgci56FauRELJKl6O8ntBVH1APLVaVNPCodlG/V+A+h79tYrSqi3QKycc18niRc7Eiq8wWQ8VbX+OhkmY=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEdAe+aHzafP9dhAtdIAtOm2sC12803SCpA/3rl1ydGqAiReivZh0j/TO2wBzoqsan7nzM7eG4TWSpqK+0ZBgBjrUjB9Cj1eCLSLOLFpIUpLcs70zpiXFEg4VCxifit+r7hVmAjbLpb7lUOEBeuKAC+NijlzOD2XrC+yd3AhBkIuX/kEOqNS457QburXRcER973lXO7bXpB0owCrgGAzOsy1i7FT6Zz4mSB7l2Iy2drh0BXBPs+laJ9chzaIYm3t6/xdGegDzZd9R0R/aKxaO2CGff8by/bJ8Ga/DZNziOBiuIImaU3kBJc76SWraZeoiOMwDTosKuZfFadJWywRHIP1xUSkKdLGnB0MzpGtOhcIWX642g/WIM4+Y078U5nwtvOcNHpA/uT9uRc7nBCEzPpJVHtyVbh0kQ9x86pCj83Ph6ZZ1RPGolhJ6oztdGyl5QMj/rkG45+H83p9c18d5vzsZzrcKaYtBEg3BJ80PfCqFw5Al9hHq/55Yd0D5PiK8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN+sxaZ1V99vc+E5ar8KEv4Hqy68kJM/buHn1/XxovLr#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDc5CVbyus+PfQGnwFQkfkACIJgIJPRc/fJ1ooz9D/2T/S79sUKftWyZ1JOurJ8lQdLc+LgRGezTzhfuY3R3F6E=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCow+01n6Hl7e4y/xRpTIYbwm1BUam3jmz5ScpeEvosFn7TfszdHV/Do5gTioKon9F6x7Kn2fhkWobIt7rTveNaK0lE2p35tJDQJQ5zYJD3N4aWHdvfaigYEXYaH3OOpmqEhRw/IyxGzW1MS8OfGUNyziUYt99LLYhcEkDneuZnPOI2444OzzU0pYxCtaVSevz9aDR2yi9BWKNIP8iMTNqu9UpE9IaOANEDrZu7gbGMBTDiR1lYzo1peJrtAa/cpTF9DoFnddTbpOMLjd6HaRrnifcc9fP1YtxWn8T1ldTjecUUCp2yo6ycdOUdBiJG9yWw1gI7SXYjeHJbX/1QS6HWd5DWxJFbSf0zP5d5BWyDf5+TFu1/gImUA0HT8WOYb4tm1QH1NAThcRLvtUFg32CcbqOnUyAxW0wDeGoLCW7EERN9OKr11fwlYjdyW/TbqYWRn0J2WhZa4OoZ/C4m9ug6PP7SEo9wXLqN9t4eArVkbeTemzPigVRqNrD2eywEU4k=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCkglmiqZQwqqMItgWA6O04td1K/U4vAgm36NE9rj3U#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLD7v/1C4ThvDcQi8c4DTsjkszkaGHBX0ZNWy5MwKVH3Qt7bVSlXkD8SB3/nhOUlBIzdAK/JQpzVyqfy+61YZMk=#012 create=True mode=0644 path=/tmp/ansible.tsgtncb6 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:26 np0005478302 python3.9[74979]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.tsgtncb6' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:31:27 np0005478302 python3.9[75133]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.tsgtncb6 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
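
Session 16 is the known-hosts rollout: a temp file is created (tempfile, prefix=ansible.), blockinfile fills it with the RSA/ed25519/ECDSA entries for all three compute hosts, the file is copied over /etc/ssh/ssh_known_hosts, then deleted. A condensed sketch of the same staging pattern; the key material is truncated to "AAAA..." placeholders here, while the log record carries the full keys:

    import os
    import tempfile

    entries = [
        "compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAA...",
        "compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAA...",
        "compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAA...",
    ]

    fd, staged = tempfile.mkstemp(prefix="ansible.")
    with os.fdopen(fd, "w") as f:
        f.write("# BEGIN ANSIBLE MANAGED BLOCK\n")
        f.writelines(e + "\n" for e in entries)
        f.write("# END ANSIBLE MANAGED BLOCK\n")
    # Overwrite the target from the staged copy, then clean up.
    with open(staged) as src, open("/etc/ssh/ssh_known_hosts", "w") as dst:
        dst.write(src.read())
    os.unlink(staged)
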
Oct  9 05:31:27 np0005478302 systemd[1]: session-16.scope: Deactivated successfully.
Oct  9 05:31:27 np0005478302 systemd[1]: session-16.scope: Consumed 2.347s CPU time.
Oct  9 05:31:27 np0005478302 systemd-logind[745]: Session 16 logged out. Waiting for processes to exit.
Oct  9 05:31:27 np0005478302 systemd-logind[745]: Removed session 16.
Oct  9 05:31:32 np0005478302 systemd-logind[745]: New session 17 of user zuul.
Oct  9 05:31:32 np0005478302 systemd[1]: Started Session 17 of User zuul.
Oct  9 05:31:33 np0005478302 python3.9[75311]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:31:34 np0005478302 python3.9[75467]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  9 05:31:34 np0005478302 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  9 05:31:34 np0005478302 python3.9[75621]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:31:35 np0005478302 python3.9[75776]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:31:35 np0005478302 python3.9[75929]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:31:36 np0005478302 python3.9[76083]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:31:37 np0005478302 python3.9[76238]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
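
Session 17 applies what the earlier session only generated, using a marker file for idempotency: edpm-rules.nft.changed was touched at 05:31:14 when the rules were rewritten; here the chains are loaded unconditionally, the flush/rules/update-jumps pipeline runs because the stat at 05:31:35 still finds the marker, and the marker is removed at the end. As a sketch:

    import os
    import subprocess

    MARKER = "/etc/nftables/edpm-rules.nft.changed"
    APPLY_ORDER = [
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
    ]

    # Chains are created/refreshed on every run.
    subprocess.run(["nft", "-f", "/etc/nftables/edpm-chains.nft"], check=True)
    if os.path.exists(MARKER):  # rules changed since the last apply
        combined = b"".join(open(p, "rb").read() for p in APPLY_ORDER)
        subprocess.run(["nft", "-f", "-"], input=combined, check=True)
        os.unlink(MARKER)
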
Oct  9 05:31:37 np0005478302 systemd[1]: session-17.scope: Deactivated successfully.
Oct  9 05:31:37 np0005478302 systemd[1]: session-17.scope: Consumed 3.126s CPU time.
Oct  9 05:31:37 np0005478302 systemd-logind[745]: Session 17 logged out. Waiting for processes to exit.
Oct  9 05:31:37 np0005478302 systemd-logind[745]: Removed session 17.
Oct  9 05:31:42 np0005478302 systemd-logind[745]: New session 18 of user zuul.
Oct  9 05:31:42 np0005478302 systemd[1]: Started Session 18 of User zuul.
Oct  9 05:31:43 np0005478302 python3.9[76416]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:31:43 np0005478302 chronyd[61102]: Selected source 198.137.202.32 (pool.ntp.org)
Oct  9 05:31:44 np0005478302 python3.9[76572]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:31:44 np0005478302 python3.9[76656]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  9 05:31:46 np0005478302 python3.9[76807]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:31:47 np0005478302 python3.9[76960]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/reboot_required/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:47 np0005478302 python3.9[77112]: ansible-ansible.builtin.file Invoked with mode=0600 path=/var/lib/openstack/reboot_required/needs_restarting state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:48 np0005478302 python3.9[77264]: ansible-ansible.builtin.lineinfile Invoked with dest=/var/lib/openstack/reboot_required/needs_restarting line=Core libraries or services have been updated since boot-up:#012  * systemd#012#012Reboot is required to fully utilize these updates.#012More information: https://access.redhat.com/solutions/27943 path=/var/lib/openstack/reboot_required/needs_restarting state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:48 np0005478302 python3.9[77414]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
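
The needs-restarting check feeds the reboot bookkeeping: `needs-restarting -r` (from the yum-utils package installed just before) exits non-zero when core libraries or services were updated since boot, and the play records the fact under /var/lib/openstack/reboot_required/ (here naming systemd as the trigger). A sketch of the decision, assuming the documented exit-code convention (0 = no reboot needed, 1 = reboot needed):

    import pathlib
    import subprocess

    flag_dir = pathlib.Path("/var/lib/openstack/reboot_required")
    result = subprocess.run(["needs-restarting", "-r"],
                            capture_output=True, text=True)
    if result.returncode == 1:
        flag_dir.mkdir(parents=True, exist_ok=True)
        # The play composes its own explanatory message via lineinfile;
        # storing the tool's output here is a simplification.
        (flag_dir / "needs_restarting").write_text(result.stdout)
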
Oct  9 05:31:49 np0005478302 python3.9[77564]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:31:49 np0005478302 python3.9[77714]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:31:50 np0005478302 python3.9[77866]: ansible-ansible.legacy.setup Invoked with gather_subset=['min'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:31:50 np0005478302 python3.9[77979]: ansible-ansible.legacy.find Invoked with paths=['/sbin', '/bin', '/usr/sbin', '/usr/bin', '/usr/local/sbin'] patterns=['shutdown'] file_type=any read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
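
The last task of this run searches the standard binary directories for a `shutdown` executable, presumably so the pending reboot can be issued with an explicit path. The same lookup in a couple of lines:

    import shutil

    # Mirrors the logged find over /sbin, /bin, /usr/sbin, /usr/bin,
    # /usr/local/sbin.
    path = shutil.which(
        "shutdown",
        path="/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin",
    )
    print(path or "no shutdown binary found")
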
Oct  9 09:31:56 compute-0 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct  9 09:31:56 compute-0 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 09:31:56 compute-0 kernel: BIOS-provided physical RAM map:
Oct  9 09:31:56 compute-0 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct  9 09:31:56 compute-0 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct  9 09:31:56 compute-0 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct  9 09:31:56 compute-0 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Oct  9 09:31:56 compute-0 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Oct  9 09:31:56 compute-0 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct  9 09:31:56 compute-0 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct  9 09:31:56 compute-0 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct  9 09:31:56 compute-0 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct  9 09:31:56 compute-0 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000027fffffff] usable
Oct  9 09:31:56 compute-0 kernel: NX (Execute Disable) protection: active
Oct  9 09:31:56 compute-0 kernel: APIC: Static calls initialized
Oct  9 09:31:56 compute-0 kernel: SMBIOS 2.8 present.
Oct  9 09:31:56 compute-0 kernel: DMI: Red Hat OpenStack Compute/RHEL, BIOS 1.16.1-1.el9 04/01/2014
Oct  9 09:31:56 compute-0 kernel: Hypervisor detected: KVM
Oct  9 09:31:56 compute-0 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct  9 09:31:56 compute-0 kernel: kvm-clock: using sched offset of 1896538437969 cycles
Oct  9 09:31:56 compute-0 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct  9 09:31:56 compute-0 kernel: tsc: Detected 2445.406 MHz processor
Oct  9 09:31:56 compute-0 kernel: last_pfn = 0x280000 max_arch_pfn = 0x400000000
Oct  9 09:31:56 compute-0 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct  9 09:31:56 compute-0 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct  9 09:31:56 compute-0 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Oct  9 09:31:56 compute-0 kernel: found SMP MP-table at [mem 0x000f5b60-0x000f5b6f]
Oct  9 09:31:56 compute-0 kernel: Using GB pages for direct mapping
Oct  9 09:31:56 compute-0 kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct  9 09:31:56 compute-0 kernel: ACPI: Early table checksum verification disabled
Oct  9 09:31:56 compute-0 kernel: ACPI: RSDP 0x00000000000F5B20 000014 (v00 BOCHS )
Oct  9 09:31:56 compute-0 kernel: ACPI: RSDT 0x000000007FFE35EB 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 09:31:56 compute-0 kernel: ACPI: FACP 0x000000007FFE3403 0000F4 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 09:31:56 compute-0 kernel: ACPI: DSDT 0x000000007FFDFCC0 003743 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 09:31:56 compute-0 kernel: ACPI: FACS 0x000000007FFDFC80 000040
Oct  9 09:31:56 compute-0 kernel: ACPI: APIC 0x000000007FFE34F7 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 09:31:56 compute-0 kernel: ACPI: MCFG 0x000000007FFE3587 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 09:31:56 compute-0 kernel: ACPI: WAET 0x000000007FFE35C3 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 09:31:56 compute-0 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe3403-0x7ffe34f6]
Oct  9 09:31:56 compute-0 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfcc0-0x7ffe3402]
Oct  9 09:31:56 compute-0 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfc80-0x7ffdfcbf]
Oct  9 09:31:56 compute-0 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe34f7-0x7ffe3586]
Oct  9 09:31:56 compute-0 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe3587-0x7ffe35c2]
Oct  9 09:31:56 compute-0 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe35c3-0x7ffe35ea]
Oct  9 09:31:56 compute-0 kernel: No NUMA configuration found
Oct  9 09:31:56 compute-0 kernel: Faking a node at [mem 0x0000000000000000-0x000000027fffffff]
Oct  9 09:31:56 compute-0 kernel: NODE_DATA(0) allocated [mem 0x27ffd5000-0x27fffffff]
Oct  9 09:31:56 compute-0 kernel: crashkernel reserved: 0x000000006f000000 - 0x000000007f000000 (256 MB)
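
The 256 MB figure follows from the crashkernel= ranges on the command line (crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M): each entry means "if total RAM falls in this range, reserve this much", and this guest's ~10 GiB lands in the 2G-64G bucket. A worked check:

    def crashkernel_size(total_bytes: int) -> int:
        G, M = 1 << 30, 1 << 20
        # (start, end, reservation); end=None means open-ended.
        ranges = [(1 * G, 2 * G, 192 * M),
                  (2 * G, 64 * G, 256 * M),
                  (64 * G, None, 512 * M)]
        for lo, hi, size in ranges:
            if total_bytes >= lo and (hi is None or total_bytes < hi):
                return size
        return 0  # below the first range: nothing reserved

    # The e820 usable ranges top out at 0x27fffffff, i.e. 10 GiB of RAM.
    assert crashkernel_size(10 * (1 << 30)) == 256 * (1 << 20)
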
Oct  9 09:31:56 compute-0 kernel: Zone ranges:
Oct  9 09:31:56 compute-0 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct  9 09:31:56 compute-0 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct  9 09:31:56 compute-0 kernel:  Normal   [mem 0x0000000100000000-0x000000027fffffff]
Oct  9 09:31:56 compute-0 kernel:  Device   empty
Oct  9 09:31:56 compute-0 kernel: Movable zone start for each node
Oct  9 09:31:56 compute-0 kernel: Early memory node ranges
Oct  9 09:31:56 compute-0 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct  9 09:31:56 compute-0 kernel:  node   0: [mem 0x0000000000100000-0x000000007ffdafff]
Oct  9 09:31:56 compute-0 kernel:  node   0: [mem 0x0000000100000000-0x000000027fffffff]
Oct  9 09:31:56 compute-0 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000027fffffff]
Oct  9 09:31:56 compute-0 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct  9 09:31:56 compute-0 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct  9 09:31:56 compute-0 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct  9 09:31:56 compute-0 kernel: ACPI: PM-Timer IO Port: 0x608
Oct  9 09:31:56 compute-0 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct  9 09:31:56 compute-0 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct  9 09:31:56 compute-0 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct  9 09:31:56 compute-0 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct  9 09:31:56 compute-0 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct  9 09:31:56 compute-0 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct  9 09:31:56 compute-0 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct  9 09:31:56 compute-0 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct  9 09:31:56 compute-0 kernel: TSC deadline timer available
Oct  9 09:31:56 compute-0 kernel: CPU topo: Max. logical packages:   4
Oct  9 09:31:56 compute-0 kernel: CPU topo: Max. logical dies:       4
Oct  9 09:31:56 compute-0 kernel: CPU topo: Max. dies per package:   1
Oct  9 09:31:56 compute-0 kernel: CPU topo: Max. threads per core:   1
Oct  9 09:31:56 compute-0 kernel: CPU topo: Num. cores per package:     1
Oct  9 09:31:56 compute-0 kernel: CPU topo: Num. threads per package:   1
Oct  9 09:31:56 compute-0 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct  9 09:31:56 compute-0 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct  9 09:31:56 compute-0 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct  9 09:31:56 compute-0 kernel: kvm-guest: setup PV sched yield
Oct  9 09:31:56 compute-0 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct  9 09:31:56 compute-0 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct  9 09:31:56 compute-0 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct  9 09:31:56 compute-0 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct  9 09:31:56 compute-0 kernel: PM: hibernation: Registered nosave memory: [mem 0x7ffdb000-0x7fffffff]
Oct  9 09:31:56 compute-0 kernel: PM: hibernation: Registered nosave memory: [mem 0x80000000-0xafffffff]
Oct  9 09:31:56 compute-0 kernel: PM: hibernation: Registered nosave memory: [mem 0xb0000000-0xbfffffff]
Oct  9 09:31:56 compute-0 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfed1bfff]
Oct  9 09:31:56 compute-0 kernel: PM: hibernation: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff]
Oct  9 09:31:56 compute-0 kernel: PM: hibernation: Registered nosave memory: [mem 0xfed20000-0xfeffbfff]
Oct  9 09:31:56 compute-0 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct  9 09:31:56 compute-0 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct  9 09:31:56 compute-0 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct  9 09:31:56 compute-0 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct  9 09:31:56 compute-0 kernel: Booting paravirtualized kernel on KVM
Oct  9 09:31:56 compute-0 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct  9 09:31:56 compute-0 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct  9 09:31:56 compute-0 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u524288
Oct  9 09:31:56 compute-0 kernel: kvm-guest: PV spinlocks enabled
Oct  9 09:31:56 compute-0 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct  9 09:31:56 compute-0 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 09:31:56 compute-0 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct  9 09:31:56 compute-0 kernel: random: crng init done
Oct  9 09:31:56 compute-0 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct  9 09:31:56 compute-0 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct  9 09:31:56 compute-0 kernel: Fallback order for Node 0: 0 
Oct  9 09:31:56 compute-0 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct  9 09:31:56 compute-0 kernel: Policy zone: Normal
Oct  9 09:31:56 compute-0 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct  9 09:31:56 compute-0 kernel: software IO TLB: area num 4.
Oct  9 09:31:56 compute-0 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct  9 09:31:56 compute-0 kernel: ftrace: allocating 49370 entries in 193 pages
Oct  9 09:31:56 compute-0 kernel: ftrace: allocated 193 pages with 3 groups
Oct  9 09:31:56 compute-0 kernel: Dynamic Preempt: voluntary
Oct  9 09:31:56 compute-0 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct  9 09:31:56 compute-0 kernel: rcu: 	RCU event tracing is enabled.
Oct  9 09:31:56 compute-0 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
Oct  9 09:31:56 compute-0 kernel: 	Trampoline variant of Tasks RCU enabled.
Oct  9 09:31:56 compute-0 kernel: 	Rude variant of Tasks RCU enabled.
Oct  9 09:31:56 compute-0 kernel: 	Tracing variant of Tasks RCU enabled.
Oct  9 09:31:56 compute-0 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct  9 09:31:56 compute-0 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct  9 09:31:56 compute-0 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct  9 09:31:56 compute-0 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct  9 09:31:56 compute-0 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct  9 09:31:56 compute-0 kernel: NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
Oct  9 09:31:56 compute-0 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct  9 09:31:56 compute-0 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
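
The kfence pool size follows from the object count: upstream sizes the pool at (num_objects + 1) * 2 pages, i.e. a data page plus a guard page per object, plus one leading pair. A small check, assuming that formula and 4 KiB pages:

    # KFENCE pool for 255 objects: (255 + 1) * 2 pages * 4096 B = 2097152 B.
    assert (255 + 1) * 2 * 4096 == 2097152
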
Oct  9 09:31:56 compute-0 kernel: Console: colour VGA+ 80x25
Oct  9 09:31:56 compute-0 kernel: printk: console [ttyS0] enabled
Oct  9 09:31:56 compute-0 kernel: ACPI: Core revision 20230331
Oct  9 09:31:56 compute-0 kernel: APIC: Switch to symmetric I/O mode setup
Oct  9 09:31:56 compute-0 kernel: x2apic enabled
Oct  9 09:31:56 compute-0 kernel: APIC: Switched APIC routing to: physical x2apic
Oct  9 09:31:56 compute-0 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct  9 09:31:56 compute-0 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct  9 09:31:56 compute-0 kernel: kvm-guest: setup PV IPIs
Oct  9 09:31:56 compute-0 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct  9 09:31:56 compute-0 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Oct  9 09:31:56 compute-0 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct  9 09:31:56 compute-0 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct  9 09:31:56 compute-0 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct  9 09:31:56 compute-0 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct  9 09:31:56 compute-0 kernel: Spectre V2 : Mitigation: Retpolines
Oct  9 09:31:56 compute-0 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct  9 09:31:56 compute-0 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Oct  9 09:31:56 compute-0 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct  9 09:31:56 compute-0 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct  9 09:31:56 compute-0 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct  9 09:31:56 compute-0 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct  9 09:31:56 compute-0 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct  9 09:31:56 compute-0 kernel: Transient Scheduler Attacks: Vulnerable: No microcode
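
The mitigation status above is also exported at runtime, one file per issue, under /sys/devices/system/cpu/vulnerabilities. A short reader using only the standard library (requires a Linux host):

    from pathlib import Path

    # Each file holds one line, e.g. "Mitigation: Retpolines" for spectre_v2.
    for f in sorted(Path("/sys/devices/system/cpu/vulnerabilities").iterdir()):
        print(f"{f.name:32} {f.read_text().strip()}")
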
Oct  9 09:31:56 compute-0 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct  9 09:31:56 compute-0 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct  9 09:31:56 compute-0 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct  9 09:31:56 compute-0 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Oct  9 09:31:56 compute-0 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct  9 09:31:56 compute-0 kernel: x86/fpu: xstate_offset[9]:  832, xstate_sizes[9]:    8
Oct  9 09:31:56 compute-0 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
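
In the compacted XSAVE format the context size is just the last enabled component's offset plus its size, and the printed offsets line up: the 512-byte legacy x87/SSE region plus the 64-byte XSAVE header puts AVX at 576, AVX's 256 bytes put PKRU at 832, and 832 + 8 = 840. As arithmetic:

    legacy_region, xsave_header = 512, 64
    avx_off, avx_size = 576, 256      # xstate_offset[2], xstate_sizes[2]
    pkru_off, pkru_size = 832, 8      # xstate_offset[9], xstate_sizes[9]
    assert legacy_region + xsave_header == avx_off
    assert avx_off + avx_size == pkru_off
    assert pkru_off + pkru_size == 840  # "context size is 840 bytes"
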
Oct  9 09:31:56 compute-0 kernel: Freeing SMP alternatives memory: 40K
Oct  9 09:31:56 compute-0 kernel: pid_max: default: 32768 minimum: 301
Oct  9 09:31:56 compute-0 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct  9 09:31:56 compute-0 kernel: landlock: Up and running.
Oct  9 09:31:56 compute-0 kernel: Yama: becoming mindful.
Oct  9 09:31:56 compute-0 kernel: SELinux:  Initializing.
Oct  9 09:31:56 compute-0 kernel: LSM support for eBPF active
Oct  9 09:31:56 compute-0 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  9 09:31:56 compute-0 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  9 09:31:56 compute-0 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Oct  9 09:31:56 compute-0 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct  9 09:31:56 compute-0 kernel: ... version:                0
Oct  9 09:31:56 compute-0 kernel: ... bit width:              48
Oct  9 09:31:56 compute-0 kernel: ... generic registers:      6
Oct  9 09:31:56 compute-0 kernel: ... value mask:             0000ffffffffffff
Oct  9 09:31:56 compute-0 kernel: ... max period:             00007fffffffffff
Oct  9 09:31:56 compute-0 kernel: ... fixed-purpose events:   0
Oct  9 09:31:56 compute-0 kernel: ... event mask:             000000000000003f
Oct  9 09:31:56 compute-0 kernel: signal: max sigframe size: 3376
Oct  9 09:31:56 compute-0 kernel: rcu: Hierarchical SRCU implementation.
Oct  9 09:31:56 compute-0 kernel: rcu: 	Max phase no-delay instances is 400.
Oct  9 09:31:56 compute-0 kernel: smp: Bringing up secondary CPUs ...
Oct  9 09:31:56 compute-0 kernel: smpboot: x86: Booting SMP configuration:
Oct  9 09:31:56 compute-0 kernel: .... node  #0, CPUs:      #1 #2 #3
Oct  9 09:31:56 compute-0 kernel: smp: Brought up 1 node, 4 CPUs
Oct  9 09:31:56 compute-0 kernel: smpboot: Total of 4 processors activated (19563.24 BogoMIPS)
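
The BogoMIPS figures follow from the lpj value in the calibration line earlier: the kernel prints lpj / (500000 / HZ) with two truncated decimal digits, and the total is the per-CPU sum. A check assuming HZ=1000 (the RHEL 9 default) and that truncating format:

    def bogomips(lpj, hz=1000):
        # Integer math mirrors the kernel's printf: truncation, not rounding.
        return f"{lpj // (500000 // hz)}.{(lpj // (5000 // hz)) % 100:02d}"

    print(bogomips(2445406))      # -> 4890.81  (per-CPU calibration line)
    print(bogomips(4 * 2445406))  # -> 19563.24 (total for 4 processors)
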
Oct  9 09:31:56 compute-0 kernel: node 0 deferred pages initialised in 17ms
Oct  9 09:31:56 compute-0 kernel: Memory: 7767908K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 615456K reserved, 0K cma-reserved)
Oct  9 09:31:56 compute-0 kernel: devtmpfs: initialized
Oct  9 09:31:56 compute-0 kernel: x86/mm: Memory block size: 128MB
Oct  9 09:31:56 compute-0 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct  9 09:31:56 compute-0 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct  9 09:31:56 compute-0 kernel: pinctrl core: initialized pinctrl subsystem
Oct  9 09:31:56 compute-0 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct  9 09:31:56 compute-0 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct  9 09:31:56 compute-0 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct  9 09:31:56 compute-0 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct  9 09:31:56 compute-0 kernel: audit: initializing netlink subsys (disabled)
Oct  9 09:31:56 compute-0 kernel: audit: type=2000 audit(1760002315.472:1): state=initialized audit_enabled=0 res=1
Oct  9 09:31:56 compute-0 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct  9 09:31:56 compute-0 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct  9 09:31:56 compute-0 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct  9 09:31:56 compute-0 kernel: cpuidle: using governor menu
Oct  9 09:31:56 compute-0 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct  9 09:31:56 compute-0 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Oct  9 09:31:56 compute-0 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct  9 09:31:56 compute-0 kernel: PCI: Using configuration type 1 for base access
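
The 256 MiB ECAM window gives every possible function on buses 00-ff a 4 KiB slice of config space, addressed as base + (bus << 20) + (device << 15) + (function << 12). A check against the window in the log:

    base = 0xb0000000

    def ecam(bus, dev, fn):
        # Standard PCIe ECAM layout: 4 KiB of config space per function.
        return base + (bus << 20) + (dev << 15) + (fn << 12)

    assert ecam(0x00, 0, 0) == 0xb0000000           # first function
    assert ecam(0xff, 31, 7) + 0xfff == 0xbfffffff  # last byte of the window
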
Oct  9 09:31:56 compute-0 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct  9 09:31:56 compute-0 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct  9 09:31:56 compute-0 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct  9 09:31:56 compute-0 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct  9 09:31:56 compute-0 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
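
The "vmemmap can be freed" figures follow from struct page accounting: every 4 KiB base page needs a 64-byte struct page, and the optimization keeps exactly one vmemmap page per huge page. A check reproducing both lines:

    PAGE, STRUCT_PAGE = 4096, 64

    def freeable_kib(huge_size):
        vmemmap = (huge_size // PAGE) * STRUCT_PAGE  # struct pages, in bytes
        return (vmemmap - PAGE) // 1024              # all but one page freeable

    assert freeable_kib(1 << 30) == 16380  # 1.00 GiB page line
    assert freeable_kib(2 << 20) == 28     # 2.00 MiB page line
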
Oct  9 09:31:56 compute-0 kernel: Demotion targets for Node 0: null
Oct  9 09:31:56 compute-0 kernel: cryptd: max_cpu_qlen set to 1000
Oct  9 09:31:56 compute-0 kernel: ACPI: Added _OSI(Module Device)
Oct  9 09:31:56 compute-0 kernel: ACPI: Added _OSI(Processor Device)
Oct  9 09:31:56 compute-0 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct  9 09:31:56 compute-0 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct  9 09:31:56 compute-0 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct  9 09:31:56 compute-0 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct  9 09:31:56 compute-0 kernel: ACPI: Interpreter enabled
Oct  9 09:31:56 compute-0 kernel: ACPI: PM: (supports S0 S5)
Oct  9 09:31:56 compute-0 kernel: ACPI: Using IOAPIC for interrupt routing
Oct  9 09:31:56 compute-0 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct  9 09:31:56 compute-0 kernel: PCI: Using E820 reservations for host bridge windows
Oct  9 09:31:56 compute-0 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct  9 09:31:56 compute-0 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct  9 09:31:56 compute-0 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct  9 09:31:56 compute-0 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR DPC]
Oct  9 09:31:56 compute-0 kernel: acpi PNP0A08:00: _OSC: OS now controls [SHPCHotplug PME AER PCIeCapability]
Oct  9 09:31:56 compute-0 kernel: PCI host bridge to bus 0000:00
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:00: root bus resource [mem 0x280000000-0xa7fffffff window]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:01.0: BAR 0 [mem 0xf9800000-0xf9ffffff pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfc200000-0xfc203fff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.1: BAR 0 [mem 0xfea1a000-0xfea1afff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.2: BAR 0 [mem 0xfea1b000-0xfea1bfff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.3: BAR 0 [mem 0xfea1c000-0xfea1cfff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.4: BAR 0 [mem 0xfea1d000-0xfea1dfff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.5: BAR 0 [mem 0xfea1e000-0xfea1efff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.6: BAR 0 [mem 0xfea1f000-0xfea1ffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.7: BAR 0 [mem 0xfea20000-0xfea20fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:04.0: BAR 0 [mem 0xfea21000-0xfea21fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:1f.2: BAR 4 [io  0xd040-0xd05f]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea22000-0xfea22fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:1f.3: BAR 4 [io  0x0700-0x073f]
Oct  9 09:31:56 compute-0 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Oct  9 09:31:56 compute-0 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfc800000-0xfc8000ff 64bit]
Oct  9 09:31:56 compute-0 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct  9 09:31:56 compute-0 kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:02: extended config space not accessible
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [0] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [1] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [2] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [3] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [4] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [5] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [6] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [7] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [8] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [9] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [10] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [11] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [12] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [13] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [14] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [15] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [16] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [17] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [18] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [19] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [20] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [21] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [22] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [23] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [24] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [25] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [26] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [27] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [28] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [29] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [30] registered
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [31] registered
Oct  9 09:31:56 compute-0 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct  9 09:31:56 compute-0 kernel: pci 0000:02:01.0: BAR 4 [io  0xc000-0xc01f]
Oct  9 09:31:56 compute-0 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [0-2] registered
Oct  9 09:31:56 compute-0 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Oct  9 09:31:56 compute-0 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe840000-0xfe840fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:03:00.0: BAR 4 [mem 0xfbe00000-0xfbe03fff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:03:00.0: ROM [mem 0xfe800000-0xfe83ffff pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [0-3] registered
Oct  9 09:31:56 compute-0 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Oct  9 09:31:56 compute-0 kernel: pci 0000:04:00.0: BAR 1 [mem 0xfe600000-0xfe600fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfbc00000-0xfbc03fff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [0-4] registered
Oct  9 09:31:56 compute-0 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Oct  9 09:31:56 compute-0 kernel: pci 0000:05:00.0: BAR 4 [mem 0xfba00000-0xfba03fff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [0-5] registered
Oct  9 09:31:56 compute-0 kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Oct  9 09:31:56 compute-0 kernel: pci 0000:06:00.0: BAR 4 [mem 0xfb800000-0xfb803fff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [0-6] registered
Oct  9 09:31:56 compute-0 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Oct  9 09:31:56 compute-0 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfe040000-0xfe040fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfb600000-0xfb603fff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:07:00.0: ROM [mem 0xfe000000-0xfe03ffff pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [0-7] registered
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [0-8] registered
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [0-9] registered
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [0-10] registered
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [0-11] registered
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [0-12] registered
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [0-13] registered
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [0-14] registered
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [0-15] registered
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [0-16] registered
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct  9 09:31:56 compute-0 kernel: acpiphp: Slot [0-17] registered
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
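
The [vendor:device] IDs enumerated above (1af4 is virtio, 1b36 is QEMU/Red Hat) can be listed the same way on the running system through sysfs. A minimal reader, assuming the standard /sys/bus/pci layout:

    from pathlib import Path

    # Print each function as BDF, class code, and vendor:device pair.
    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        vendor = (dev / "vendor").read_text().strip()   # e.g. 0x1af4
        device = (dev / "device").read_text().strip()   # e.g. 0x1041
        pclass = (dev / "class").read_text().strip()    # e.g. 0x020000
        print(f"{dev.name} class {pclass} [{vendor[2:]}:{device[2:]}]")
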
Oct  9 09:31:56 compute-0 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct  9 09:31:56 compute-0 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct  9 09:31:56 compute-0 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct  9 09:31:56 compute-0 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct  9 09:31:56 compute-0 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct  9 09:31:56 compute-0 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct  9 09:31:56 compute-0 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct  9 09:31:56 compute-0 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct  9 09:31:56 compute-0 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct  9 09:31:56 compute-0 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct  9 09:31:56 compute-0 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct  9 09:31:56 compute-0 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct  9 09:31:56 compute-0 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct  9 09:31:56 compute-0 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct  9 09:31:56 compute-0 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct  9 09:31:56 compute-0 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct  9 09:31:56 compute-0 kernel: iommu: Default domain type: Translated
Oct  9 09:31:56 compute-0 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct  9 09:31:56 compute-0 kernel: SCSI subsystem initialized
Oct  9 09:31:56 compute-0 kernel: ACPI: bus type USB registered
Oct  9 09:31:56 compute-0 kernel: usbcore: registered new interface driver usbfs
Oct  9 09:31:56 compute-0 kernel: usbcore: registered new interface driver hub
Oct  9 09:31:56 compute-0 kernel: usbcore: registered new device driver usb
Oct  9 09:31:56 compute-0 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct  9 09:31:56 compute-0 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct  9 09:31:56 compute-0 kernel: PTP clock support registered
Oct  9 09:31:56 compute-0 kernel: EDAC MC: Ver: 3.0.0
Oct  9 09:31:56 compute-0 kernel: NetLabel: Initializing
Oct  9 09:31:56 compute-0 kernel: NetLabel:  domain hash size = 128
Oct  9 09:31:56 compute-0 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct  9 09:31:56 compute-0 kernel: NetLabel:  unlabeled traffic allowed by default
Oct  9 09:31:56 compute-0 kernel: PCI: Using ACPI for IRQ routing
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct  9 09:31:56 compute-0 kernel: vgaarb: loaded
Oct  9 09:31:56 compute-0 kernel: clocksource: Switched to clocksource kvm-clock
Oct  9 09:31:56 compute-0 kernel: VFS: Disk quotas dquot_6.6.0
Oct  9 09:31:56 compute-0 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct  9 09:31:56 compute-0 kernel: pnp: PnP ACPI init
Oct  9 09:31:56 compute-0 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct  9 09:31:56 compute-0 kernel: pnp: PnP ACPI: found 5 devices
Oct  9 09:31:56 compute-0 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct  9 09:31:56 compute-0 kernel: NET: Registered PF_INET protocol family
Oct  9 09:31:56 compute-0 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct  9 09:31:56 compute-0 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct  9 09:31:56 compute-0 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct  9 09:31:56 compute-0 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct  9 09:31:56 compute-0 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct  9 09:31:56 compute-0 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct  9 09:31:56 compute-0 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct  9 09:31:56 compute-0 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  9 09:31:56 compute-0 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  9 09:31:56 compute-0 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct  9 09:31:56 compute-0 kernel: NET: Registered PF_XDP protocol family
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x0fff] to [bus 03] add_size 1000
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.2: bridge window [io  0x1000-0x0fff] to [bus 04] add_size 1000
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.3: bridge window [io  0x1000-0x0fff] to [bus 05] add_size 1000
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.4: bridge window [io  0x1000-0x0fff] to [bus 06] add_size 1000
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.5: bridge window [io  0x1000-0x0fff] to [bus 07] add_size 1000
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.6: bridge window [io  0x1000-0x0fff] to [bus 08] add_size 1000
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.7: bridge window [io  0x1000-0x0fff] to [bus 09] add_size 1000
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.0: bridge window [io  0x1000-0x0fff] to [bus 0a] add_size 1000
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.1: bridge window [io  0x1000-0x0fff] to [bus 0b] add_size 1000
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.2: bridge window [io  0x1000-0x0fff] to [bus 0c] add_size 1000
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.3: bridge window [io  0x1000-0x0fff] to [bus 0d] add_size 1000
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.4: bridge window [io  0x1000-0x0fff] to [bus 0e] add_size 1000
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.5: bridge window [io  0x1000-0x0fff] to [bus 0f] add_size 1000
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.6: bridge window [io  0x1000-0x0fff] to [bus 10] add_size 1000
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.7: bridge window [io  0x1000-0x0fff] to [bus 11] add_size 1000
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x0fff] to [bus 12] add_size 1000
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x1fff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.2: bridge window [io  0x2000-0x2fff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.3: bridge window [io  0x3000-0x3fff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.4: bridge window [io  0x4000-0x4fff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.5: bridge window [io  0x5000-0x5fff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.6: bridge window [io  0x6000-0x6fff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.7: bridge window [io  0x7000-0x7fff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.0: bridge window [io  0x8000-0x8fff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.1: bridge window [io  0x9000-0x9fff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.2: bridge window [io  0xa000-0xafff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.3: bridge window [io  0xb000-0xbfff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.4: bridge window [io  0xe000-0xefff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.5: bridge window [io  0xf000-0xffff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: failed to assign
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: failed to assign
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: failed to assign
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x1fff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.7: bridge window [io  0x2000-0x2fff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.6: bridge window [io  0x3000-0x3fff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.5: bridge window [io  0x4000-0x4fff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.4: bridge window [io  0x5000-0x5fff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.3: bridge window [io  0x6000-0x6fff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.2: bridge window [io  0x7000-0x7fff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.1: bridge window [io  0x8000-0x8fff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.0: bridge window [io  0x9000-0x9fff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.7: bridge window [io  0xa000-0xafff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.6: bridge window [io  0xb000-0xbfff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.5: bridge window [io  0xe000-0xefff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.4: bridge window [io  0xf000-0xffff]: assigned
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: failed to assign
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: failed to assign
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: failed to assign
Oct  9 09:31:56 compute-0 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct  9 09:31:56 compute-0 kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.4:   bridge window [io  0xf000-0xffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.5:   bridge window [io  0xe000-0xefff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.6:   bridge window [io  0xb000-0xbfff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.7:   bridge window [io  0xa000-0xafff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.0:   bridge window [io  0x9000-0x9fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.1:   bridge window [io  0x8000-0x8fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.2:   bridge window [io  0x7000-0x7fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.3:   bridge window [io  0x6000-0x6fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.4:   bridge window [io  0x5000-0x5fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.5:   bridge window [io  0x4000-0x4fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.6:   bridge window [io  0x3000-0x3fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.7:   bridge window [io  0x2000-0x2fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:04.0:   bridge window [io  0x1000-0x1fff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Oct  9 09:31:56 compute-0 kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:00: resource 9 [mem 0x280000000-0xa7fffffff window]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:01: resource 0 [io  0xc000-0xcfff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:01: resource 1 [mem 0xfc600000-0xfc9fffff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:01: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:02: resource 0 [io  0xc000-0xcfff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:02: resource 1 [mem 0xfc600000-0xfc7fffff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:02: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:03: resource 2 [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:04: resource 2 [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:05: resource 2 [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:06: resource 0 [io  0xf000-0xffff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:06: resource 2 [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:07: resource 0 [io  0xe000-0xefff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:07: resource 2 [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:08: resource 0 [io  0xb000-0xbfff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:08: resource 2 [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:09: resource 0 [io  0xa000-0xafff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:09: resource 2 [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:0a: resource 0 [io  0x9000-0x9fff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:0a: resource 1 [mem 0xfda00000-0xfdbfffff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:0a: resource 2 [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:0b: resource 0 [io  0x8000-0x8fff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd800000-0xfd9fffff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:0b: resource 2 [mem 0xfae00000-0xfaffffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:0c: resource 0 [io  0x7000-0x7fff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd600000-0xfd7fffff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:0c: resource 2 [mem 0xfac00000-0xfadfffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:0d: resource 0 [io  0x6000-0x6fff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:0d: resource 1 [mem 0xfd400000-0xfd5fffff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:0d: resource 2 [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:0e: resource 0 [io  0x5000-0x5fff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:0e: resource 1 [mem 0xfd200000-0xfd3fffff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:0e: resource 2 [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:0f: resource 0 [io  0x4000-0x4fff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:0f: resource 1 [mem 0xfd000000-0xfd1fffff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:0f: resource 2 [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:10: resource 0 [io  0x3000-0x3fff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:10: resource 1 [mem 0xfce00000-0xfcffffff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:10: resource 2 [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:11: resource 0 [io  0x2000-0x2fff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:11: resource 1 [mem 0xfcc00000-0xfcdfffff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:11: resource 2 [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:12: resource 0 [io  0x1000-0x1fff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:12: resource 1 [mem 0xfca00000-0xfcbfffff]
Oct  9 09:31:56 compute-0 kernel: pci_bus 0000:12: resource 2 [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct  9 09:31:56 compute-0 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct  9 09:31:56 compute-0 kernel: PCI: CLS 0 bytes, default 64
Oct  9 09:31:56 compute-0 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct  9 09:31:56 compute-0 kernel: software IO TLB: mapped [mem 0x000000006b000000-0x000000006f000000] (64MB)
Oct  9 09:31:56 compute-0 kernel: Trying to unpack rootfs image as initramfs...
Oct  9 09:31:56 compute-0 kernel: ACPI: bus type thunderbolt registered
Oct  9 09:31:56 compute-0 kernel: Initialise system trusted keyrings
Oct  9 09:31:56 compute-0 kernel: Key type blacklist registered
Oct  9 09:31:56 compute-0 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct  9 09:31:56 compute-0 kernel: zbud: loaded
Oct  9 09:31:56 compute-0 kernel: integrity: Platform Keyring initialized
Oct  9 09:31:56 compute-0 kernel: integrity: Machine keyring initialized
Oct  9 09:31:56 compute-0 kernel: Freeing initrd memory: 86104K
Oct  9 09:31:56 compute-0 kernel: NET: Registered PF_ALG protocol family
Oct  9 09:31:56 compute-0 kernel: xor: automatically using best checksumming function   avx
Oct  9 09:31:56 compute-0 kernel: Key type asymmetric registered
Oct  9 09:31:56 compute-0 kernel: Asymmetric key parser 'x509' registered
Oct  9 09:31:56 compute-0 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct  9 09:31:56 compute-0 kernel: io scheduler mq-deadline registered
Oct  9 09:31:56 compute-0 kernel: io scheduler kyber registered
Oct  9 09:31:56 compute-0 kernel: io scheduler bfq registered
Oct  9 09:31:56 compute-0 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Oct  9 09:31:56 compute-0 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39
Oct  9 09:31:56 compute-0 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40
Oct  9 09:31:56 compute-0 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40
Oct  9 09:31:56 compute-0 kernel: shpchp 0000:01:00.0: HPC vendor_id 1b36 device_id e ss_vid 0 ss_did 0
Oct  9 09:31:56 compute-0 kernel: shpchp 0000:01:00.0: pci_hp_register failed with error -16
Oct  9 09:31:56 compute-0 kernel: shpchp 0000:01:00.0: Slot initialization failed
Oct  9 09:31:56 compute-0 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct  9 09:31:56 compute-0 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct  9 09:31:56 compute-0 kernel: ACPI: button: Power Button [PWRF]
Oct  9 09:31:56 compute-0 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Oct  9 09:31:56 compute-0 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct  9 09:31:56 compute-0 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct  9 09:31:56 compute-0 kernel: Non-volatile memory driver v1.3
Oct  9 09:31:56 compute-0 kernel: rdac: device handler registered
Oct  9 09:31:56 compute-0 kernel: hp_sw: device handler registered
Oct  9 09:31:56 compute-0 kernel: emc: device handler registered
Oct  9 09:31:56 compute-0 kernel: alua: device handler registered
Oct  9 09:31:56 compute-0 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller
Oct  9 09:31:56 compute-0 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1
Oct  9 09:31:56 compute-0 kernel: uhci_hcd 0000:02:01.0: detected 2 ports
Oct  9 09:31:56 compute-0 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x0000c000
Oct  9 09:31:56 compute-0 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct  9 09:31:56 compute-0 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct  9 09:31:56 compute-0 kernel: usb usb1: Product: UHCI Host Controller
Oct  9 09:31:56 compute-0 kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct  9 09:31:56 compute-0 kernel: usb usb1: SerialNumber: 0000:02:01.0
Oct  9 09:31:56 compute-0 kernel: hub 1-0:1.0: USB hub found
Oct  9 09:31:56 compute-0 kernel: hub 1-0:1.0: 2 ports detected
Oct  9 09:31:56 compute-0 kernel: usbcore: registered new interface driver usbserial_generic
Oct  9 09:31:56 compute-0 kernel: usbserial: USB Serial support registered for generic
Oct  9 09:31:56 compute-0 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct  9 09:31:56 compute-0 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct  9 09:31:56 compute-0 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct  9 09:31:56 compute-0 kernel: mousedev: PS/2 mouse device common for all mice
Oct  9 09:31:56 compute-0 kernel: rtc_cmos 00:03: RTC can wake from S4
Oct  9 09:31:56 compute-0 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct  9 09:31:56 compute-0 kernel: rtc_cmos 00:03: registered as rtc0
Oct  9 09:31:56 compute-0 kernel: rtc_cmos 00:03: setting system clock to 2025-10-09T09:31:56 UTC (1760002316)
Oct  9 09:31:56 compute-0 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
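
The epoch value in parentheses is consistent with the RTC timestamp, which is easy to confirm with the standard library:

    from datetime import datetime, timezone

    # 1760002316 s after the Unix epoch is the time the kernel set from the RTC.
    print(datetime.fromtimestamp(1760002316, tz=timezone.utc).isoformat())
    # -> 2025-10-09T09:31:56+00:00
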
Oct  9 09:31:56 compute-0 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct  9 09:31:56 compute-0 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct  9 09:31:56 compute-0 kernel: usbcore: registered new interface driver usbhid
Oct  9 09:31:56 compute-0 kernel: usbhid: USB HID core driver
Oct  9 09:31:56 compute-0 kernel: drop_monitor: Initializing network drop monitor service
Oct  9 09:31:56 compute-0 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct  9 09:31:56 compute-0 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct  9 09:31:56 compute-0 kernel: Initializing XFRM netlink socket
Oct  9 09:31:56 compute-0 kernel: NET: Registered PF_INET6 protocol family
Oct  9 09:31:56 compute-0 kernel: Segment Routing with IPv6
Oct  9 09:31:56 compute-0 kernel: NET: Registered PF_PACKET protocol family
Oct  9 09:31:56 compute-0 kernel: mpls_gso: MPLS GSO support
Oct  9 09:31:56 compute-0 kernel: IPI shorthand broadcast: enabled
Oct  9 09:31:56 compute-0 kernel: AVX2 version of gcm_enc/dec engaged.
Oct  9 09:31:56 compute-0 kernel: AES CTR mode by8 optimization enabled
Oct  9 09:31:56 compute-0 kernel: sched_clock: Marking stable (1097001823, 142169843)->(1306017947, -66846281)
Oct  9 09:31:56 compute-0 kernel: registered taskstats version 1
Oct  9 09:31:56 compute-0 kernel: Loading compiled-in X.509 certificates
Oct  9 09:31:56 compute-0 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  9 09:31:56 compute-0 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct  9 09:31:56 compute-0 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct  9 09:31:56 compute-0 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct  9 09:31:56 compute-0 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct  9 09:31:56 compute-0 kernel: Demotion targets for Node 0: null
Oct  9 09:31:56 compute-0 kernel: page_owner is disabled
Oct  9 09:31:56 compute-0 kernel: Key type .fscrypt registered
Oct  9 09:31:56 compute-0 kernel: Key type fscrypt-provisioning registered
Oct  9 09:31:56 compute-0 kernel: Key type big_key registered
Oct  9 09:31:56 compute-0 kernel: Key type encrypted registered
Oct  9 09:31:56 compute-0 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct  9 09:31:56 compute-0 kernel: Loading compiled-in module X.509 certificates
Oct  9 09:31:56 compute-0 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  9 09:31:56 compute-0 kernel: ima: Allocated hash algorithm: sha256
Oct  9 09:31:56 compute-0 kernel: ima: No architecture policies found
Oct  9 09:31:56 compute-0 kernel: evm: Initialising EVM extended attributes:
Oct  9 09:31:56 compute-0 kernel: evm: security.selinux
Oct  9 09:31:56 compute-0 kernel: evm: security.SMACK64 (disabled)
Oct  9 09:31:56 compute-0 kernel: evm: security.SMACK64EXEC (disabled)
Oct  9 09:31:56 compute-0 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct  9 09:31:56 compute-0 kernel: evm: security.SMACK64MMAP (disabled)
Oct  9 09:31:56 compute-0 kernel: evm: security.apparmor (disabled)
Oct  9 09:31:56 compute-0 kernel: evm: security.ima
Oct  9 09:31:56 compute-0 kernel: evm: security.capability
Oct  9 09:31:56 compute-0 kernel: evm: HMAC attrs: 0x1
Oct  9 09:31:56 compute-0 kernel: Running certificate verification RSA selftest
Oct  9 09:31:56 compute-0 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct  9 09:31:56 compute-0 kernel: Running certificate verification ECDSA selftest
Oct  9 09:31:56 compute-0 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct  9 09:31:56 compute-0 kernel: clk: Disabling unused clocks
Oct  9 09:31:56 compute-0 kernel: Freeing unused decrypted memory: 2028K
Oct  9 09:31:56 compute-0 kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct  9 09:31:56 compute-0 kernel: Write protecting the kernel read-only data: 30720k
Oct  9 09:31:56 compute-0 kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct  9 09:31:56 compute-0 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct  9 09:31:56 compute-0 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct  9 09:31:56 compute-0 kernel: Run /init as init process
Oct  9 09:31:56 compute-0 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  9 09:31:56 compute-0 systemd: Detected virtualization kvm.
Oct  9 09:31:56 compute-0 systemd: Detected architecture x86-64.
Oct  9 09:31:56 compute-0 systemd: Running in initrd.
Oct  9 09:31:56 compute-0 systemd: No hostname configured, using default hostname.
Oct  9 09:31:56 compute-0 systemd: Hostname set to <localhost>.
Oct  9 09:31:56 compute-0 systemd: Initializing machine ID from VM UUID.
Oct  9 09:31:56 compute-0 systemd: Queued start job for default target Initrd Default Target.
Oct  9 09:31:56 compute-0 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  9 09:31:56 compute-0 systemd: Reached target Local Encrypted Volumes.
Oct  9 09:31:56 compute-0 systemd: Reached target Initrd /usr File System.
Oct  9 09:31:56 compute-0 systemd: Reached target Local File Systems.
Oct  9 09:31:56 compute-0 systemd: Reached target Path Units.
Oct  9 09:31:56 compute-0 systemd: Reached target Slice Units.
Oct  9 09:31:56 compute-0 systemd: Reached target Swaps.
Oct  9 09:31:56 compute-0 systemd: Reached target Timer Units.
Oct  9 09:31:56 compute-0 systemd: Listening on D-Bus System Message Bus Socket.
Oct  9 09:31:56 compute-0 systemd: Listening on Journal Socket (/dev/log).
Oct  9 09:31:56 compute-0 systemd: Listening on Journal Socket.
Oct  9 09:31:56 compute-0 systemd: Listening on udev Control Socket.
Oct  9 09:31:56 compute-0 systemd: Listening on udev Kernel Socket.
Oct  9 09:31:56 compute-0 systemd: Reached target Socket Units.
Oct  9 09:31:56 compute-0 systemd: Starting Create List of Static Device Nodes...
Oct  9 09:31:56 compute-0 systemd: Starting Journal Service...
Oct  9 09:31:56 compute-0 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  9 09:31:56 compute-0 systemd: Starting Apply Kernel Variables...
Oct  9 09:31:56 compute-0 systemd: Starting Create System Users...
Oct  9 09:31:56 compute-0 systemd: Starting Setup Virtual Console...
Oct  9 09:31:56 compute-0 systemd: Finished Create List of Static Device Nodes.
Oct  9 09:31:56 compute-0 systemd: Finished Apply Kernel Variables.
Oct  9 09:31:56 compute-0 systemd: Finished Create System Users.
Oct  9 09:31:56 compute-0 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct  9 09:31:56 compute-0 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct  9 09:31:56 compute-0 kernel: usb 1-1: Product: QEMU USB Tablet
Oct  9 09:31:56 compute-0 kernel: usb 1-1: Manufacturer: QEMU
Oct  9 09:31:56 compute-0 kernel: usb 1-1: SerialNumber: 28754-0000:00:02.0:00.0:01.0-1
Oct  9 09:31:56 compute-0 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct  9 09:31:56 compute-0 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0
Oct  9 09:31:56 compute-0 systemd-journald[282]: Journal started
Oct  9 09:31:56 compute-0 systemd-journald[282]: Runtime Journal (/run/log/journal/c2ce88da801c421fa8d632aab8dfbba9) is 8.0M, max 153.6M, 145.6M free.
Oct  9 09:31:56 compute-0 systemd-sysusers[285]: Creating group 'users' with GID 100.
Oct  9 09:31:56 compute-0 systemd-sysusers[285]: Creating group 'dbus' with GID 81.
Oct  9 09:31:56 compute-0 systemd-sysusers[285]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct  9 09:31:56 compute-0 systemd: Started Journal Service.
Oct  9 09:31:56 compute-0 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  9 09:31:56 compute-0 systemd[1]: Starting Create Volatile Files and Directories...
Oct  9 09:31:57 compute-0 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  9 09:31:57 compute-0 systemd[1]: Finished Create Volatile Files and Directories.
Oct  9 09:31:57 compute-0 systemd[1]: Finished Setup Virtual Console.
Oct  9 09:31:57 compute-0 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct  9 09:31:57 compute-0 systemd[1]: Starting dracut cmdline hook...
Oct  9 09:31:57 compute-0 dracut-cmdline[300]: dracut-9 dracut-057-102.git20250818.el9
Oct  9 09:31:57 compute-0 dracut-cmdline[300]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 09:31:57 compute-0 systemd[1]: Finished dracut cmdline hook.
Oct  9 09:31:57 compute-0 systemd[1]: Starting dracut pre-udev hook...
Oct  9 09:31:57 compute-0 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct  9 09:31:57 compute-0 kernel: device-mapper: uevent: version 1.0.3
Oct  9 09:31:57 compute-0 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct  9 09:31:57 compute-0 kernel: RPC: Registered named UNIX socket transport module.
Oct  9 09:31:57 compute-0 kernel: RPC: Registered udp transport module.
Oct  9 09:31:57 compute-0 kernel: RPC: Registered tcp transport module.
Oct  9 09:31:57 compute-0 kernel: RPC: Registered tcp-with-tls transport module.
Oct  9 09:31:57 compute-0 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct  9 09:31:57 compute-0 rpc.statd[417]: Version 2.5.4 starting
Oct  9 09:31:57 compute-0 rpc.statd[417]: Initializing NSM state
Oct  9 09:31:57 compute-0 rpc.idmapd[422]: Setting log level to 0
Oct  9 09:31:57 compute-0 systemd[1]: Finished dracut pre-udev hook.
Oct  9 09:31:57 compute-0 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  9 09:31:57 compute-0 systemd-udevd[435]: Using default interface naming scheme 'rhel-9.0'.
Oct  9 09:31:57 compute-0 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  9 09:31:57 compute-0 systemd[1]: Starting dracut pre-trigger hook...
Oct  9 09:31:57 compute-0 systemd[1]: Finished dracut pre-trigger hook.
Oct  9 09:31:57 compute-0 systemd[1]: Starting Coldplug All udev Devices...
Oct  9 09:31:57 compute-0 systemd[1]: Created slice Slice /system/modprobe.
Oct  9 09:31:57 compute-0 systemd[1]: Starting Load Kernel Module configfs...
Oct  9 09:31:57 compute-0 systemd[1]: Finished Coldplug All udev Devices.
Oct  9 09:31:57 compute-0 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  9 09:31:57 compute-0 systemd[1]: Reached target Network.
Oct  9 09:31:57 compute-0 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  9 09:31:57 compute-0 systemd[1]: Starting dracut initqueue hook...
Oct  9 09:31:57 compute-0 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  9 09:31:57 compute-0 systemd[1]: Finished Load Kernel Module configfs.
Oct  9 09:31:57 compute-0 kernel: virtio_blk virtio2: 4/0/0 default/read/poll queues
Oct  9 09:31:57 compute-0 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct  9 09:31:57 compute-0 kernel: vda: vda1
Oct  9 09:31:57 compute-0 systemd-udevd[440]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:31:57 compute-0 systemd-udevd[467]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:31:57 compute-0 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct  9 09:31:57 compute-0 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Oct  9 09:31:57 compute-0 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Oct  9 09:31:57 compute-0 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only 
Oct  9 09:31:57 compute-0 kernel: scsi host0: ahci
Oct  9 09:31:57 compute-0 kernel: scsi host1: ahci
Oct  9 09:31:57 compute-0 kernel: scsi host2: ahci
Oct  9 09:31:57 compute-0 kernel: scsi host3: ahci
Oct  9 09:31:57 compute-0 kernel: scsi host4: ahci
Oct  9 09:31:57 compute-0 systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  9 09:31:57 compute-0 kernel: scsi host5: ahci
Oct  9 09:31:57 compute-0 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22100 irq 52 lpm-pol 0
Oct  9 09:31:57 compute-0 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22180 irq 52 lpm-pol 0
Oct  9 09:31:57 compute-0 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22200 irq 52 lpm-pol 0
Oct  9 09:31:57 compute-0 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22280 irq 52 lpm-pol 0
Oct  9 09:31:57 compute-0 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22300 irq 52 lpm-pol 0
Oct  9 09:31:57 compute-0 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22380 irq 52 lpm-pol 0
Oct  9 09:31:57 compute-0 systemd[1]: Reached target Initrd Root Device.
Oct  9 09:31:57 compute-0 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct  9 09:31:57 compute-0 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Oct  9 09:31:57 compute-0 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct  9 09:31:57 compute-0 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct  9 09:31:57 compute-0 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct  9 09:31:57 compute-0 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct  9 09:31:57 compute-0 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct  9 09:31:57 compute-0 kernel: ata1.00: applying bridge limits
Oct  9 09:31:57 compute-0 kernel: ata1.00: configured for UDMA/100
Oct  9 09:31:57 compute-0 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct  9 09:31:57 compute-0 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct  9 09:31:57 compute-0 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct  9 09:31:57 compute-0 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct  9 09:31:57 compute-0 systemd[1]: Mounting Kernel Configuration File System...
Oct  9 09:31:57 compute-0 systemd[1]: Mounted Kernel Configuration File System.
Oct  9 09:31:57 compute-0 systemd[1]: Reached target System Initialization.
Oct  9 09:31:57 compute-0 systemd[1]: Reached target Basic System.
Oct  9 09:31:57 compute-0 systemd[1]: Finished dracut initqueue hook.
Oct  9 09:31:57 compute-0 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  9 09:31:57 compute-0 systemd[1]: Reached target Remote Encrypted Volumes.
Oct  9 09:31:57 compute-0 systemd[1]: Reached target Remote File Systems.
Oct  9 09:31:57 compute-0 systemd[1]: Starting dracut pre-mount hook...
Oct  9 09:31:57 compute-0 systemd[1]: Finished dracut pre-mount hook.
Oct  9 09:31:57 compute-0 systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct  9 09:31:57 compute-0 systemd-fsck[528]: /usr/sbin/fsck.xfs: XFS file system.
Oct  9 09:31:57 compute-0 systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  9 09:31:57 compute-0 systemd[1]: Mounting /sysroot...
Oct  9 09:31:58 compute-0 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct  9 09:31:58 compute-0 kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct  9 09:31:58 compute-0 kernel: XFS (vda1): Ending clean mount
Oct  9 09:31:58 compute-0 systemd[1]: Mounted /sysroot.
Oct  9 09:31:58 compute-0 systemd[1]: Reached target Initrd Root File System.
Oct  9 09:31:58 compute-0 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct  9 09:31:58 compute-0 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct  9 09:31:58 compute-0 systemd[1]: Reached target Initrd File Systems.
Oct  9 09:31:58 compute-0 systemd[1]: Reached target Initrd Default Target.
Oct  9 09:31:58 compute-0 systemd[1]: Starting dracut mount hook...
Oct  9 09:31:58 compute-0 systemd[1]: Finished dracut mount hook.
Oct  9 09:31:58 compute-0 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct  9 09:31:58 compute-0 rpc.idmapd[422]: exiting on signal 15
Oct  9 09:31:58 compute-0 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct  9 09:31:58 compute-0 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct  9 09:31:58 compute-0 systemd[1]: Stopped target Network.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped target Timer Units.
Oct  9 09:31:58 compute-0 systemd[1]: dbus.socket: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct  9 09:31:58 compute-0 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped target Initrd Default Target.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped target Basic System.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped target Initrd Root Device.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped target Initrd /usr File System.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped target Path Units.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped target Remote File Systems.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped target Slice Units.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped target Socket Units.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped target System Initialization.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped target Local File Systems.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped target Swaps.
Oct  9 09:31:58 compute-0 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped dracut mount hook.
Oct  9 09:31:58 compute-0 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped dracut pre-mount hook.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped target Local Encrypted Volumes.
Oct  9 09:31:58 compute-0 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct  9 09:31:58 compute-0 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped dracut initqueue hook.
Oct  9 09:31:58 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Oct  9 09:31:58 compute-0 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped Create Volatile Files and Directories.
Oct  9 09:31:58 compute-0 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped Coldplug All udev Devices.
Oct  9 09:31:58 compute-0 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped dracut pre-trigger hook.
Oct  9 09:31:58 compute-0 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct  9 09:31:58 compute-0 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped Setup Virtual Console.
Oct  9 09:31:58 compute-0 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct  9 09:31:58 compute-0 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct  9 09:31:58 compute-0 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Closed udev Control Socket.
Oct  9 09:31:58 compute-0 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Closed udev Kernel Socket.
Oct  9 09:31:58 compute-0 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped dracut pre-udev hook.
Oct  9 09:31:58 compute-0 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped dracut cmdline hook.
Oct  9 09:31:58 compute-0 systemd[1]: Starting Cleanup udev Database...
Oct  9 09:31:58 compute-0 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct  9 09:31:58 compute-0 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped Create List of Static Device Nodes.
Oct  9 09:31:58 compute-0 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Stopped Create System Users.
Oct  9 09:31:58 compute-0 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct  9 09:31:58 compute-0 systemd[1]: Finished Cleanup udev Database.
Oct  9 09:31:58 compute-0 systemd[1]: Reached target Switch Root.
Oct  9 09:31:58 compute-0 systemd[1]: Starting Switch Root...
Oct  9 09:31:58 compute-0 systemd[1]: Switching root.
Oct  9 09:31:58 compute-0 systemd-journald[282]: Journal stopped
Oct  9 09:31:59 compute-0 systemd-journald: Received SIGTERM from PID 1 (systemd).
Oct  9 09:31:59 compute-0 kernel: audit: type=1404 audit(1760002318.645:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct  9 09:31:59 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 09:31:59 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct  9 09:31:59 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 09:31:59 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct  9 09:31:59 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 09:31:59 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 09:31:59 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 09:31:59 compute-0 kernel: audit: type=1403 audit(1760002318.759:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct  9 09:31:59 compute-0 systemd: Successfully loaded SELinux policy in 117.932ms.
Oct  9 09:31:59 compute-0 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.574ms.
Oct  9 09:31:59 compute-0 systemd: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  9 09:31:59 compute-0 systemd: Detected virtualization kvm.
Oct  9 09:31:59 compute-0 systemd: Detected architecture x86-64.
Oct  9 09:31:59 compute-0 systemd: Hostname set to <compute-0>.
Oct  9 09:31:59 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:31:59 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:31:59 compute-0 systemd: initrd-switch-root.service: Deactivated successfully.
Oct  9 09:31:59 compute-0 systemd: Stopped Switch Root.
Oct  9 09:31:59 compute-0 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct  9 09:31:59 compute-0 systemd: Created slice Slice /system/getty.
Oct  9 09:31:59 compute-0 systemd: Created slice Slice /system/serial-getty.
Oct  9 09:31:59 compute-0 systemd: Created slice Slice /system/sshd-keygen.
Oct  9 09:31:59 compute-0 systemd: Created slice User and Session Slice.
Oct  9 09:31:59 compute-0 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  9 09:31:59 compute-0 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct  9 09:31:59 compute-0 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct  9 09:31:59 compute-0 systemd: Reached target Local Encrypted Volumes.
Oct  9 09:31:59 compute-0 systemd: Stopped target Switch Root.
Oct  9 09:31:59 compute-0 systemd: Stopped target Initrd File Systems.
Oct  9 09:31:59 compute-0 systemd: Stopped target Initrd Root File System.
Oct  9 09:31:59 compute-0 systemd: Reached target Local Integrity Protected Volumes.
Oct  9 09:31:59 compute-0 systemd: Reached target Path Units.
Oct  9 09:31:59 compute-0 systemd: Reached target rpc_pipefs.target.
Oct  9 09:31:59 compute-0 systemd: Reached target Slice Units.
Oct  9 09:31:59 compute-0 systemd: Reached target Local Verity Protected Volumes.
Oct  9 09:31:59 compute-0 systemd: Listening on Device-mapper event daemon FIFOs.
Oct  9 09:31:59 compute-0 systemd: Listening on LVM2 poll daemon socket.
Oct  9 09:31:59 compute-0 systemd: Listening on RPCbind Server Activation Socket.
Oct  9 09:31:59 compute-0 systemd: Reached target RPC Port Mapper.
Oct  9 09:31:59 compute-0 systemd: Listening on Process Core Dump Socket.
Oct  9 09:31:59 compute-0 systemd: Listening on initctl Compatibility Named Pipe.
Oct  9 09:31:59 compute-0 systemd: Listening on udev Control Socket.
Oct  9 09:31:59 compute-0 systemd: Listening on udev Kernel Socket.
Oct  9 09:31:59 compute-0 systemd: Mounting Huge Pages File System...
Oct  9 09:31:59 compute-0 systemd: Mounting /dev/hugepages1G...
Oct  9 09:31:59 compute-0 systemd: Mounting /dev/hugepages2M...
Oct  9 09:31:59 compute-0 systemd: Mounting POSIX Message Queue File System...
Oct  9 09:31:59 compute-0 systemd: Mounting Kernel Debug File System...
Oct  9 09:31:59 compute-0 systemd: Mounting Kernel Trace File System...
Oct  9 09:31:59 compute-0 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  9 09:31:59 compute-0 systemd: Starting Create List of Static Device Nodes...
Oct  9 09:31:59 compute-0 systemd: Load legacy module configuration was skipped because no trigger condition checks were met.
Oct  9 09:31:59 compute-0 systemd: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct  9 09:31:59 compute-0 systemd: Starting Load Kernel Module configfs...
Oct  9 09:31:59 compute-0 systemd: Starting Load Kernel Module drm...
Oct  9 09:31:59 compute-0 systemd: Starting Load Kernel Module efi_pstore...
Oct  9 09:31:59 compute-0 systemd: Starting Load Kernel Module fuse...
Oct  9 09:31:59 compute-0 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct  9 09:31:59 compute-0 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct  9 09:31:59 compute-0 systemd: Stopped File System Check on Root Device.
Oct  9 09:31:59 compute-0 systemd: Stopped Journal Service.
Oct  9 09:31:59 compute-0 kernel: fuse: init (API version 7.37)
Oct  9 09:31:59 compute-0 systemd: Starting Journal Service...
Oct  9 09:31:59 compute-0 systemd: Starting Load Kernel Modules...
Oct  9 09:31:59 compute-0 systemd: Starting Generate network units from Kernel command line...
Oct  9 09:31:59 compute-0 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 09:31:59 compute-0 systemd: Starting Remount Root and Kernel File Systems...
Oct  9 09:31:59 compute-0 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct  9 09:31:59 compute-0 systemd: Starting Coldplug All udev Devices...
Oct  9 09:31:59 compute-0 systemd: Mounted Huge Pages File System.
Oct  9 09:31:59 compute-0 systemd: Mounted /dev/hugepages1G.
Oct  9 09:31:59 compute-0 systemd: Mounted /dev/hugepages2M.
Oct  9 09:31:59 compute-0 systemd: Mounted POSIX Message Queue File System.
Oct  9 09:31:59 compute-0 systemd: Mounted Kernel Debug File System.
Oct  9 09:31:59 compute-0 systemd: Mounted Kernel Trace File System.
Oct  9 09:31:59 compute-0 systemd: Finished Create List of Static Device Nodes.
Oct  9 09:31:59 compute-0 kernel: ACPI: bus type drm_connector registered
Oct  9 09:31:59 compute-0 systemd: modprobe@configfs.service: Deactivated successfully.
Oct  9 09:31:59 compute-0 systemd: Finished Load Kernel Module configfs.
Oct  9 09:31:59 compute-0 systemd: modprobe@drm.service: Deactivated successfully.
Oct  9 09:31:59 compute-0 systemd: Finished Load Kernel Module drm.
Oct  9 09:31:59 compute-0 systemd-journald[661]: Journal started
Oct  9 09:31:59 compute-0 systemd-journald[661]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.6M, 145.6M free.
Oct  9 09:31:59 compute-0 systemd[1]: Queued start job for default target Multi-User System.
Oct  9 09:31:59 compute-0 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct  9 09:31:59 compute-0 systemd: Started Journal Service.
Oct  9 09:31:59 compute-0 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct  9 09:31:59 compute-0 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct  9 09:31:59 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct  9 09:31:59 compute-0 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct  9 09:31:59 compute-0 systemd[1]: Finished Load Kernel Module fuse.
Oct  9 09:31:59 compute-0 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct  9 09:31:59 compute-0 systemd[1]: Finished Generate network units from Kernel command line.
Oct  9 09:31:59 compute-0 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct  9 09:31:59 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct  9 09:31:59 compute-0 kernel: Bridge firewalling registered
Oct  9 09:31:59 compute-0 systemd-modules-load[662]: Inserted module 'br_netfilter'
Oct  9 09:31:59 compute-0 systemd[1]: Mounting FUSE Control File System...
Oct  9 09:31:59 compute-0 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct  9 09:31:59 compute-0 systemd[1]: Mounted FUSE Control File System.
Oct  9 09:31:59 compute-0 systemd[1]: Activating swap /swap...
Oct  9 09:31:59 compute-0 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  9 09:31:59 compute-0 systemd[1]: Rebuild Hardware Database was skipped because of an unmet condition check (ConditionNeedsUpdate=/etc).
Oct  9 09:31:59 compute-0 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct  9 09:31:59 compute-0 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct  9 09:31:59 compute-0 systemd[1]: Starting Load/Save OS Random Seed...
Oct  9 09:31:59 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct  9 09:31:59 compute-0 systemd[1]: Create System Users was skipped because no trigger condition checks were met.
Oct  9 09:31:59 compute-0 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  9 09:31:59 compute-0 systemd[1]: Activated swap /swap.
Oct  9 09:31:59 compute-0 systemd-journald[661]: Time spent on flushing to /var/log/journal/42833e1b511a402df82cb9cb2fc36491 is 9.318ms for 1156 entries.
Oct  9 09:31:59 compute-0 systemd-journald[661]: System Journal (/var/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 4.0G, 3.9G free.
Oct  9 09:31:59 compute-0 systemd-journald[661]: Received client request to flush runtime journal.
Oct  9 09:31:59 compute-0 systemd[1]: Reached target Swaps.
Oct  9 09:31:59 compute-0 systemd-modules-load[662]: Inserted module 'nf_conntrack'
Oct  9 09:31:59 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct  9 09:31:59 compute-0 systemd[1]: Finished Load/Save OS Random Seed.
Oct  9 09:31:59 compute-0 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  9 09:31:59 compute-0 systemd[1]: Starting Apply Kernel Variables...
Oct  9 09:31:59 compute-0 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct  9 09:31:59 compute-0 systemd[1]: Finished Apply Kernel Variables.
Oct  9 09:31:59 compute-0 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  9 09:31:59 compute-0 systemd[1]: Reached target Preparation for Local File Systems.
Oct  9 09:31:59 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct  9 09:31:59 compute-0 systemd[1]: Reached target Local File Systems.
Oct  9 09:31:59 compute-0 systemd[1]: Starting Import network configuration from initramfs...
Oct  9 09:31:59 compute-0 systemd[1]: Rebuild Dynamic Linker Cache was skipped because no trigger condition checks were met.
Oct  9 09:31:59 compute-0 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct  9 09:31:59 compute-0 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct  9 09:31:59 compute-0 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct  9 09:31:59 compute-0 systemd[1]: Starting Automatic Boot Loader Update...
Oct  9 09:31:59 compute-0 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct  9 09:31:59 compute-0 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  9 09:31:59 compute-0 systemd[1]: Finished Coldplug All udev Devices.
Oct  9 09:31:59 compute-0 bootctl[677]: Couldn't find EFI system partition, skipping.
Oct  9 09:31:59 compute-0 systemd[1]: Finished Automatic Boot Loader Update.
Oct  9 09:31:59 compute-0 systemd[1]: Finished Import network configuration from initramfs.
Oct  9 09:31:59 compute-0 systemd[1]: Starting Create Volatile Files and Directories...
Oct  9 09:31:59 compute-0 systemd-udevd[679]: Using default interface naming scheme 'rhel-9.0'.
Oct  9 09:31:59 compute-0 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  9 09:31:59 compute-0 systemd[1]: Starting Load Kernel Module configfs...
Oct  9 09:31:59 compute-0 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  9 09:31:59 compute-0 systemd[1]: Finished Load Kernel Module configfs.
Oct  9 09:31:59 compute-0 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct  9 09:31:59 compute-0 systemd-udevd[709]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:31:59 compute-0 systemd[1]: Finished Create Volatile Files and Directories.
Oct  9 09:31:59 compute-0 systemd[1]: Starting Security Auditing Service...
Oct  9 09:31:59 compute-0 systemd[1]: Starting RPC Bind...
Oct  9 09:31:59 compute-0 systemd[1]: Rebuild Journal Catalog was skipped because of an unmet condition check (ConditionNeedsUpdate=/var).
Oct  9 09:31:59 compute-0 systemd[1]: Update is Completed was skipped because no trigger condition checks were met.
Oct  9 09:31:59 compute-0 auditd[734]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct  9 09:31:59 compute-0 auditd[734]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct  9 09:31:59 compute-0 kernel: lpc_ich 0000:00:1f.0: I/O space for GPIO uninitialized
Oct  9 09:31:59 compute-0 systemd[1]: Started RPC Bind.
Oct  9 09:31:59 compute-0 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct  9 09:31:59 compute-0 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct  9 09:31:59 compute-0 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct  9 09:31:59 compute-0 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct  9 09:31:59 compute-0 augenrules[739]: /sbin/augenrules: No change
Oct  9 09:31:59 compute-0 augenrules[754]: No rules
Oct  9 09:31:59 compute-0 augenrules[754]: enabled 1
Oct  9 09:31:59 compute-0 augenrules[754]: failure 1
Oct  9 09:31:59 compute-0 augenrules[754]: pid 734
Oct  9 09:31:59 compute-0 augenrules[754]: rate_limit 0
Oct  9 09:31:59 compute-0 augenrules[754]: backlog_limit 8192
Oct  9 09:31:59 compute-0 augenrules[754]: lost 0
Oct  9 09:31:59 compute-0 augenrules[754]: backlog 0
Oct  9 09:31:59 compute-0 augenrules[754]: backlog_wait_time 60000
Oct  9 09:31:59 compute-0 augenrules[754]: backlog_wait_time_actual 0
Oct  9 09:31:59 compute-0 augenrules[754]: enabled 1
Oct  9 09:31:59 compute-0 augenrules[754]: failure 1
Oct  9 09:31:59 compute-0 augenrules[754]: pid 734
Oct  9 09:31:59 compute-0 augenrules[754]: rate_limit 0
Oct  9 09:31:59 compute-0 augenrules[754]: backlog_limit 8192
Oct  9 09:31:59 compute-0 augenrules[754]: lost 0
Oct  9 09:31:59 compute-0 augenrules[754]: backlog 4
Oct  9 09:31:59 compute-0 augenrules[754]: backlog_wait_time 60000
Oct  9 09:31:59 compute-0 augenrules[754]: backlog_wait_time_actual 0
Oct  9 09:31:59 compute-0 augenrules[754]: enabled 1
Oct  9 09:31:59 compute-0 augenrules[754]: failure 1
Oct  9 09:31:59 compute-0 augenrules[754]: pid 734
Oct  9 09:31:59 compute-0 augenrules[754]: rate_limit 0
Oct  9 09:31:59 compute-0 augenrules[754]: backlog_limit 8192
Oct  9 09:31:59 compute-0 augenrules[754]: lost 0
Oct  9 09:31:59 compute-0 augenrules[754]: backlog 0
Oct  9 09:31:59 compute-0 augenrules[754]: backlog_wait_time 60000
Oct  9 09:31:59 compute-0 augenrules[754]: backlog_wait_time_actual 0
Oct  9 09:31:59 compute-0 systemd[1]: Started Security Auditing Service.
Oct  9 09:31:59 compute-0 systemd-udevd[703]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:31:59 compute-0 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct  9 09:31:59 compute-0 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct  9 09:31:59 compute-0 kernel: iTCO_vendor_support: vendor-support=0
Oct  9 09:31:59 compute-0 kernel: iTCO_wdt iTCO_wdt.1.auto: Found a ICH9 TCO device (Version=2, TCOBASE=0x0660)
Oct  9 09:31:59 compute-0 kernel: iTCO_wdt iTCO_wdt.1.auto: initialized. heartbeat=30 sec (nowayout=0)
Oct  9 09:31:59 compute-0 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Oct  9 09:31:59 compute-0 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Oct  9 09:31:59 compute-0 kernel: Console: switching to colour dummy device 80x25
Oct  9 09:31:59 compute-0 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct  9 09:31:59 compute-0 kernel: [drm] features: -context_init
Oct  9 09:31:59 compute-0 kernel: [drm] number of scanouts: 1
Oct  9 09:31:59 compute-0 kernel: [drm] number of cap sets: 0
Oct  9 09:31:59 compute-0 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Oct  9 09:31:59 compute-0 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct  9 09:31:59 compute-0 kernel: Console: switching to colour frame buffer device 160x50
Oct  9 09:31:59 compute-0 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct  9 09:31:59 compute-0 kernel: kvm_amd: TSC scaling supported
Oct  9 09:31:59 compute-0 kernel: kvm_amd: Nested Virtualization enabled
Oct  9 09:31:59 compute-0 kernel: kvm_amd: Nested Paging enabled
Oct  9 09:31:59 compute-0 kernel: kvm_amd: LBR virtualization supported
Oct  9 09:31:59 compute-0 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct  9 09:31:59 compute-0 kernel: kvm_amd: Virtual GIF supported
Oct  9 09:32:00 compute-0 systemd[1]: Reached target System Initialization.
Oct  9 09:32:00 compute-0 systemd[1]: Started dnf makecache --timer.
Oct  9 09:32:00 compute-0 systemd[1]: Started Daily rotation of log files.
Oct  9 09:32:00 compute-0 systemd[1]: Started Run system activity accounting tool every 10 minutes.
Oct  9 09:32:00 compute-0 systemd[1]: Started Generate summary of yesterday's process accounting.
Oct  9 09:32:00 compute-0 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct  9 09:32:00 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct  9 09:32:00 compute-0 systemd[1]: Reached target Timer Units.
Oct  9 09:32:00 compute-0 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct  9 09:32:00 compute-0 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct  9 09:32:00 compute-0 systemd[1]: Reached target Socket Units.
Oct  9 09:32:00 compute-0 systemd[1]: Starting D-Bus System Message Bus...
Oct  9 09:32:00 compute-0 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 09:32:00 compute-0 systemd[1]: Started D-Bus System Message Bus.
Oct  9 09:32:00 compute-0 systemd[1]: Reached target Basic System.
Oct  9 09:32:00 compute-0 dbus-broker-lau[789]: Ready
Oct  9 09:32:00 compute-0 systemd[1]: Starting NTP client/server...
Oct  9 09:32:00 compute-0 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct  9 09:32:00 compute-0 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct  9 09:32:00 compute-0 systemd[1]: Started irqbalance daemon.
Oct  9 09:32:00 compute-0 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct  9 09:32:00 compute-0 systemd[1]: Starting Create netns directory...
Oct  9 09:32:00 compute-0 systemd[1]: Starting Netfilter Tables...
Oct  9 09:32:00 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 09:32:00 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 09:32:00 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 09:32:00 compute-0 systemd[1]: Reached target sshd-keygen.target.
Oct  9 09:32:00 compute-0 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct  9 09:32:00 compute-0 systemd[1]: Reached target User and Group Name Lookups.
Oct  9 09:32:00 compute-0 systemd[1]: Starting Resets System Activity Logs...
Oct  9 09:32:00 compute-0 systemd[1]: Starting User Login Management...
Oct  9 09:32:00 compute-0 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct  9 09:32:00 compute-0 systemd[1]: Finished Resets System Activity Logs.
Oct  9 09:32:00 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 09:32:00 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 09:32:00 compute-0 systemd[1]: Finished Create netns directory.
Oct  9 09:32:00 compute-0 chronyd[804]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  9 09:32:00 compute-0 chronyd[804]: Frequency -10.397 +/- 0.260 ppm read from /var/lib/chrony/drift
Oct  9 09:32:00 compute-0 chronyd[804]: Loaded seccomp filter (level 2)
Oct  9 09:32:00 compute-0 systemd[1]: Started NTP client/server.
Oct  9 09:32:00 compute-0 systemd-logind[798]: New seat seat0.
Oct  9 09:32:00 compute-0 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  9 09:32:00 compute-0 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  9 09:32:00 compute-0 systemd[1]: Started User Login Management.
Oct  9 09:32:00 compute-0 systemd[1]: Finished Netfilter Tables.
Oct  9 09:32:00 compute-0 cloud-init[824]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 09 Oct 2025 09:32:00 +0000. Up 5.23 seconds.
Oct  9 09:32:00 compute-0 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct  9 09:32:00 compute-0 systemd[1]: Reached target Preparation for Network.
Oct  9 09:32:00 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Oct  9 09:32:00 compute-0 chown[826]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct  9 09:32:00 compute-0 ovs-ctl[831]: Starting ovsdb-server [  OK  ]
Oct  9 09:32:00 compute-0 ovs-vsctl[880]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct  9 09:32:01 compute-0 ovs-vsctl[890]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"ef217152-08e8-40c8-a663-3565c5b77d4a\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct  9 09:32:01 compute-0 ovs-ctl[831]: Configuring Open vSwitch system IDs [  OK  ]
Oct  9 09:32:01 compute-0 ovs-vsctl[896]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct  9 09:32:01 compute-0 ovs-ctl[831]: Enabling remote OVSDB managers [  OK  ]
Oct  9 09:32:01 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Oct  9 09:32:01 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct  9 09:32:01 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct  9 09:32:01 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct  9 09:32:01 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Oct  9 09:32:01 compute-0 ovs-ctl[940]: Inserting openvswitch module [  OK  ]
Oct  9 09:32:01 compute-0 kernel: ovs-system: entered promiscuous mode
Oct  9 09:32:01 compute-0 systemd-udevd[719]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:32:01 compute-0 kernel: Timeout policy base is empty
Oct  9 09:32:01 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct  9 09:32:01 compute-0 kernel: vlan22: entered promiscuous mode
Oct  9 09:32:01 compute-0 kernel: vlan23: entered promiscuous mode
Oct  9 09:32:01 compute-0 systemd-udevd[705]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:32:01 compute-0 kernel: vlan20: entered promiscuous mode
Oct  9 09:32:01 compute-0 kernel: vlan21: entered promiscuous mode
Oct  9 09:32:01 compute-0 ovs-ctl[909]: Starting ovs-vswitchd [  OK  ]
Oct  9 09:32:01 compute-0 ovs-vsctl[979]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct  9 09:32:01 compute-0 ovs-ctl[909]: Enabling remote OVSDB managers [  OK  ]
Oct  9 09:32:01 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct  9 09:32:01 compute-0 systemd[1]: Starting Open vSwitch...
Oct  9 09:32:01 compute-0 systemd[1]: Finished Open vSwitch.
Oct  9 09:32:01 compute-0 systemd[1]: Starting Network Manager...
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.3270] NetworkManager (version 1.54.1-1.el9) is starting... (boot:d4ee173d-694b-4462-a82c-c83fceecc69a)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.3273] Read config: /etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf, /var/lib/NetworkManager/NetworkManager-intern.conf
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.3356] manager[0x556c76eaf040]: monitoring kernel firmware directory '/lib/firmware'.
Oct  9 09:32:01 compute-0 systemd[1]: Starting Hostname Service...
Oct  9 09:32:01 compute-0 systemd[1]: Started Hostname Service.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.3906] hostname: hostname: using hostnamed
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.3907] hostname: static hostname changed from (none) to "compute-0"
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.3910] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.3975] manager[0x556c76eaf040]: rfkill: Wi-Fi hardware radio set enabled
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.3975] manager[0x556c76eaf040]: rfkill: WWAN hardware radio set enabled
Oct  9 09:32:01 compute-0 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4023] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4040] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4040] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4041] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4041] manager: Networking is enabled by state file
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4044] settings: Loaded settings plugin: keyfile (internal)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4065] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4141] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4164] dhcp: init: Using DHCP client 'internal'
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4167] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4179] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4189] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  9 09:32:01 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4199] device (lo): Activation: starting connection 'lo' (5464d8a9-004d-4958-a964-56e380113a8e)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4209] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4213] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4233] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/3)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4237] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4252] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/4)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4257] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4275] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/5)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4279] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4295] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/6)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4298] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4316] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/7)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4322] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4338] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4341] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4349] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4352] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4360] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4363] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4369] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4372] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4376] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/12)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4379] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4385] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/13)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4388] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4393] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/14)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4395] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4401] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4403] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 systemd[1]: Started Network Manager.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4412] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  9 09:32:01 compute-0 systemd[1]: Reached target Network.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4424] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4427] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4429] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4430] device (eth0): carrier: link connected
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4433] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4435] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4436] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4437] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4439] device (eth1): carrier: link connected
Oct  9 09:32:01 compute-0 kernel: vlan21: left promiscuous mode
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4476] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> unmanaged (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4480] device (vlan21)[Open vSwitch Port]: state change: unavailable -> unmanaged (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <warn>  [1760002321.4488] platform-linux: do-delete-link[7]: failure 95 (Operation not supported)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4490] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> unmanaged (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4494] device (vlan20)[Open vSwitch Port]: state change: unavailable -> unmanaged (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <warn>  [1760002321.4500] platform-linux: do-delete-link[6]: failure 95 (Operation not supported)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4502] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> unmanaged (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4505] device (vlan23)[Open vSwitch Port]: state change: unavailable -> unmanaged (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4512] device (eth1)[Open vSwitch Port]: state change: unavailable -> unmanaged (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <warn>  [1760002321.4517] platform-linux: do-delete-link[5]: failure 95 (Operation not supported)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4519] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> unmanaged (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4522] device (vlan22)[Open vSwitch Port]: state change: unavailable -> unmanaged (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4525] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> unmanaged (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4528] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  9 09:32:01 compute-0 systemd[1]: Starting Network Manager Wait Online...
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4537] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 09:32:01 compute-0 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4568] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 09:32:01 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4606] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4608] policy: auto-activating connection 'vlan21-port' (f882b807-3011-4187-9841-e387c4d2de4d)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4610] policy: auto-activating connection 'vlan20-port' (04c091c8-5e99-4901-b4a0-c12c907af13d)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4612] policy: auto-activating connection 'vlan23-port' (2ea4eaee-669c-455a-920b-06e176356c59)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4612] policy: auto-activating connection 'eth1-port' (2db852a8-ab77-4c6e-a5d1-216b537c5a68)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4614] policy: auto-activating connection 'vlan22-port' (9b349756-d27f-4c19-93fe-704e56edeac5)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4614] policy: auto-activating connection 'br-ex-br' (944812b3-3b90-47e3-8b93-838bc65c423a)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4616] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 09:32:01 compute-0 kernel: vlan23: left promiscuous mode
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4620] policy: auto-activating connection 'ci-private-network' (99381071-70a1-5f50-b83c-41d249156268)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4621] policy: auto-activating connection 'br-ex-port' (a60672d3-3db4-47e5-9ab7-f15def14768c)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4622] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4625] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4627] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (f882b807-3011-4187-9841-e387c4d2de4d)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4628] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4631] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4634] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (04c091c8-5e99-4901-b4a0-c12c907af13d)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4636] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4640] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4643] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (2ea4eaee-669c-455a-920b-06e176356c59)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4644] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4647] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4649] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (2db852a8-ab77-4c6e-a5d1-216b537c5a68)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4650] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4652] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4654] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (9b349756-d27f-4c19-93fe-704e56edeac5)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4655] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4659] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4662] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (944812b3-3b90-47e3-8b93-838bc65c423a)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4662] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4668] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4669] device (eth1): Activation: starting connection 'ci-private-network' (99381071-70a1-5f50-b83c-41d249156268)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4671] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (a60672d3-3db4-47e5-9ab7-f15def14768c)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4672] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4673] manager: NetworkManager state is now CONNECTING
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4674] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4675] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4676] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4711] device (eth1): disconnecting for new activation request.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4712] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4721] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4723] device (br-ex)[Open vSwitch Port]: state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4726] device (br-ex)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4727] device (eth1)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4745] device (eth1)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4746] device (vlan20)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-0 kernel: virtio_net virtio5 eth1: left promiscuous mode
Oct  9 09:32:01 compute-0 kernel: vlan22: left promiscuous mode
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4765] device (vlan20)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4766] device (vlan21)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-0 systemd[1]: Started GSSAPI Proxy Daemon.
Oct  9 09:32:01 compute-0 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  9 09:32:01 compute-0 systemd[1]: Reached target NFS client services.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4789] device (vlan21)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4790] device (vlan22)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-0 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  9 09:32:01 compute-0 systemd[1]: Reached target Remote File Systems.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4804] device (vlan22)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4805] device (vlan23)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4809] manager: NetworkManager state is now DISCONNECTING
Oct  9 09:32:01 compute-0 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4820] device (vlan23)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4821] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4822] manager: NetworkManager state is now CONNECTING
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4823] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4824] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4825] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4852] device (lo): Activation: successful, device activated.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4886] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4889] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4897] device (eth1): disconnecting for new activation request.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4900] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 kernel: vlan20: left promiscuous mode
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4912] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4955] device (eth1): Activation: starting connection 'ci-private-network' (99381071-70a1-5f50-b83c-41d249156268)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4958] device (br-ex)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4961] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (a60672d3-3db4-47e5-9ab7-f15def14768c)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4963] device (eth1)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4966] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (2db852a8-ab77-4c6e-a5d1-216b537c5a68)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4968] device (vlan20)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4971] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (04c091c8-5e99-4901-b4a0-c12c907af13d)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4973] device (vlan21)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4992] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (f882b807-3011-4187-9841-e387c4d2de4d)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.4995] device (vlan22)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5002] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (9b349756-d27f-4c19-93fe-704e56edeac5)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5003] device (vlan23)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-0 kernel: ovs-system: left promiscuous mode
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5015] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (2ea4eaee-669c-455a-920b-06e176356c59)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5047] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5053] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5058] policy: auto-activating connection 'vlan20-if' (f68223c9-22b5-4a22-91f1-248bbd45fbf6)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5059] policy: auto-activating connection 'vlan21-if' (371dc3e7-0a85-453c-958d-dbfd32cbc4ba)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5060] policy: auto-activating connection 'vlan22-if' (80a52acc-166f-460e-87df-b0382c1fb0a2)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5062] policy: auto-activating connection 'vlan23-if' (ceaca123-ecf5-470a-80f3-07bc719dfebc)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5064] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5069] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5071] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5082] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5083] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5085] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5087] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5088] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5089] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5090] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5092] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5093] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5094] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5095] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5097] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5098] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5099] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5100] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5102] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5103] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5104] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5105] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5107] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5108] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5109] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5110] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5118] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5120] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5124] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (f68223c9-22b5-4a22-91f1-248bbd45fbf6)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5125] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5127] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5129] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (371dc3e7-0a85-453c-958d-dbfd32cbc4ba)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5130] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5138] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5141] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (80a52acc-166f-460e-87df-b0382c1fb0a2)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5143] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5145] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5148] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (ceaca123-ecf5-470a-80f3-07bc719dfebc)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5148] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5150] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5153] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5157] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5163] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5167] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5176] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5182] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5188] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5193] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5210] dhcp4 (eth0): state changed new lease, address=192.168.26.64
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5214] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5220] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5221] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5230] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5233] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 kernel: ovs-system: entered promiscuous mode
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5234] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5236] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5238] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5239] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5241] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 kernel: No such timeout policy "ovs_test_tp"
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5243] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5245] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5248] policy: auto-activating connection 'br-ex-if' (46bc0613-40c6-4f7e-baf9-ff45a946f10a)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5249] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5255] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5335] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5338] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5339] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5339] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5340] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5341] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5343] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5347] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (46bc0613-40c6-4f7e-baf9-ff45a946f10a)
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5349] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 kernel: vlan20: entered promiscuous mode
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5353] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5356] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5358] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5360] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5393] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5399] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5404] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5413] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5418] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5426] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5431] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5440] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5448] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5458] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5462] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5470] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5474] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5490] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5494] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5501] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5510] device (eth0): Activation: successful, device activated.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5515] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5521] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5527] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5536] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5544] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5553] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5557] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5566] device (eth1): Activation: successful, device activated.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5572] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5575] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5583] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 09:32:01 compute-0 kernel: vlan22: entered promiscuous mode
Oct  9 09:32:01 compute-0 kernel: vlan23: entered promiscuous mode
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5711] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5719] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 kernel: vlan21: entered promiscuous mode
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5775] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5782] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5801] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5803] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5807] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5831] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5833] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5840] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5871] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5878] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5903] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5904] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.5912] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 09:32:01 compute-0 kernel: br-ex: entered promiscuous mode
Oct  9 09:32:01 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.6072] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.6080] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.6110] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.6111] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.6118] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 09:32:01 compute-0 NetworkManager[982]: <info>  [1760002321.6125] manager: startup complete
Oct  9 09:32:01 compute-0 systemd[1]: Finished Network Manager Wait Online.
Oct  9 09:32:01 compute-0 systemd[1]: Starting Cloud-init: Network Stage...
Oct  9 09:32:01 compute-0 systemd[1]: Starting Authorization Manager...
Oct  9 09:32:01 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  9 09:32:01 compute-0 polkitd[1106]: Started polkitd version 0.117
Oct  9 09:32:01 compute-0 systemd[1]: Started Authorization Manager.
Oct  9 09:32:01 compute-0 cloud-init[1209]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 09 Oct 2025 09:32:01 +0000. Up 6.42 seconds.
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: +++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: |   Device   |   Up  |     Address     |      Mask     | Scope  |     Hw-Address    |
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: |   br-ex    |  True | 192.168.122.100 | 255.255.255.0 | global | fa:16:3e:91:b0:1f |
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: |    eth0    |  True |  192.168.26.64  | 255.255.255.0 | global | fa:16:3e:77:91:b1 |
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: |    eth1    |  True |        .        |       .       |   .    | fa:16:3e:91:b0:1f |
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: |     lo     |  True |    127.0.0.1    |   255.0.0.0   |  host  |         .         |
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: |     lo     |  True |     ::1/128     |       .       |  host  |         .         |
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: | ovs-system | False |        .        |       .       |   .    | b2:a0:6f:89:45:9c |
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: |   vlan20   |  True |   172.17.0.100  | 255.255.255.0 | global | 0e:93:ac:71:95:4e |
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: |   vlan21   |  True |   172.18.0.100  | 255.255.255.0 | global | ca:70:52:62:cf:69 |
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: |   vlan22   |  True |   172.19.0.100  | 255.255.255.0 | global | 4a:f3:29:96:b1:e4 |
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: |   vlan23   |  True |   172.20.0.100  | 255.255.255.0 | global | fa:8f:14:ce:c1:3f |
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: | Route |   Destination   |   Gateway    |     Genmask     | Interface | Flags |
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: |   0   |     0.0.0.0     | 192.168.26.1 |     0.0.0.0     |    eth0   |   UG  |
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: |   1   | 169.254.169.254 | 192.168.26.2 | 255.255.255.255 |    eth0   |  UGH  |
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: |   2   |    172.17.0.0   |   0.0.0.0    |  255.255.255.0  |   vlan20  |   U   |
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: |   3   |    172.18.0.0   |   0.0.0.0    |  255.255.255.0  |   vlan21  |   U   |
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: |   4   |    172.19.0.0   |   0.0.0.0    |  255.255.255.0  |   vlan22  |   U   |
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: |   5   |    172.20.0.0   |   0.0.0.0    |  255.255.255.0  |   vlan23  |   U   |
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: |   6   |   192.168.26.0  |   0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: |   7   |  192.168.122.0  |   0.0.0.0    |  255.255.255.0  |   br-ex   |   U   |
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: |   2   |  multicast  |    ::   |    eth1   |   U   |
Oct  9 09:32:01 compute-0 cloud-init[1209]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  9 09:32:02 compute-0 systemd[1]: Finished Cloud-init: Network Stage.
Oct  9 09:32:02 compute-0 systemd[1]: Reached target Cloud-config availability.
Oct  9 09:32:02 compute-0 systemd[1]: Reached target Network is Online.
Oct  9 09:32:02 compute-0 systemd[1]: Starting Cloud-init: Config Stage...
Oct  9 09:32:02 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Oct  9 09:32:02 compute-0 systemd[1]: Starting Notify NFS peers of a restart...
Oct  9 09:32:02 compute-0 systemd[1]: Starting System Logging Service...
Oct  9 09:32:02 compute-0 sm-notify[1242]: Version 2.5.4 starting
Oct  9 09:32:02 compute-0 systemd[1]: Starting OpenSSH server daemon...
Oct  9 09:32:02 compute-0 systemd[1]: Starting Permit User Sessions...
Oct  9 09:32:02 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Oct  9 09:32:02 compute-0 systemd[1]: Started Notify NFS peers of a restart.
Oct  9 09:32:02 compute-0 systemd[1]: Finished Permit User Sessions.
Oct  9 09:32:02 compute-0 systemd[1]: Started Command Scheduler.
Oct  9 09:32:02 compute-0 systemd[1]: Started Getty on tty1.
Oct  9 09:32:02 compute-0 systemd[1]: Started Serial Getty on ttyS0.
Oct  9 09:32:02 compute-0 systemd[1]: Reached target Login Prompts.
Oct  9 09:32:02 compute-0 systemd[1]: Started OpenSSH server daemon.
Oct  9 09:32:02 compute-0 rsyslogd[1243]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1243" x-info="https://www.rsyslog.com"] start
Oct  9 09:32:02 compute-0 systemd[1]: Started System Logging Service.
Oct  9 09:32:02 compute-0 systemd[1]: Reached target Multi-User System.
Oct  9 09:32:02 compute-0 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct  9 09:32:02 compute-0 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct  9 09:32:02 compute-0 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct  9 09:32:02 compute-0 rsyslogd[1243]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:32:02 compute-0 cloud-init[1256]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 09 Oct 2025 09:32:02 +0000. Up 6.93 seconds.
Oct  9 09:32:02 compute-0 systemd[1]: Finished Cloud-init: Config Stage.
Oct  9 09:32:02 compute-0 systemd[1]: Starting Cloud-init: Final Stage...
Oct  9 09:32:02 compute-0 cloud-init[1260]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 09 Oct 2025 09:32:02 +0000. Up 7.24 seconds.
Oct  9 09:32:02 compute-0 cloud-init[1260]: Cloud-init v. 24.4-7.el9 finished at Thu, 09 Oct 2025 09:32:02 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 7.28 seconds
Oct  9 09:32:02 compute-0 systemd[1]: Finished Cloud-init: Final Stage.
Oct  9 09:32:02 compute-0 systemd[1]: Reached target Cloud-init target.
Oct  9 09:32:02 compute-0 systemd[1]: Startup finished in 1.343s (kernel) + 1.867s (initrd) + 4.120s (userspace) = 7.331s.
Oct  9 09:32:10 compute-0 irqbalance[794]: Cannot change IRQ 45 affinity: Operation not permitted
Oct  9 09:32:10 compute-0 irqbalance[794]: IRQ 45 affinity is now unmanaged
Oct  9 09:32:10 compute-0 irqbalance[794]: Cannot change IRQ 43 affinity: Operation not permitted
Oct  9 09:32:10 compute-0 irqbalance[794]: IRQ 43 affinity is now unmanaged
Oct  9 09:32:10 compute-0 irqbalance[794]: Cannot change IRQ 42 affinity: Operation not permitted
Oct  9 09:32:10 compute-0 irqbalance[794]: IRQ 42 affinity is now unmanaged
Oct  9 09:32:11 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 09:32:31 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 09:32:51 compute-0 systemd[1]: Created slice User Slice of UID 1000.
Oct  9 09:32:51 compute-0 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  9 09:32:51 compute-0 systemd-logind[798]: New session 1 of user zuul.
Oct  9 09:32:51 compute-0 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  9 09:32:51 compute-0 systemd[1]: Starting User Manager for UID 1000...
Oct  9 09:32:51 compute-0 systemd[1269]: Queued start job for default target Main User Target.
Oct  9 09:32:51 compute-0 rsyslogd[1243]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:32:51 compute-0 systemd[1269]: Created slice User Application Slice.
Oct  9 09:32:51 compute-0 systemd[1269]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 09:32:51 compute-0 systemd[1269]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 09:32:51 compute-0 systemd[1269]: Reached target Paths.
Oct  9 09:32:51 compute-0 systemd[1269]: Reached target Timers.
Oct  9 09:32:51 compute-0 systemd[1269]: Starting D-Bus User Message Bus Socket...
Oct  9 09:32:51 compute-0 systemd[1269]: Starting Create User's Volatile Files and Directories...
Oct  9 09:32:51 compute-0 systemd[1269]: Listening on D-Bus User Message Bus Socket.
Oct  9 09:32:51 compute-0 systemd[1269]: Reached target Sockets.
Oct  9 09:32:51 compute-0 systemd[1269]: Finished Create User's Volatile Files and Directories.
Oct  9 09:32:51 compute-0 systemd[1269]: Reached target Basic System.
Oct  9 09:32:51 compute-0 systemd[1269]: Reached target Main User Target.
Oct  9 09:32:51 compute-0 systemd[1269]: Startup finished in 87ms.
Oct  9 09:32:51 compute-0 systemd[1]: Started User Manager for UID 1000.
Oct  9 09:32:51 compute-0 systemd[1]: Started Session 1 of User zuul.
Oct  9 09:32:51 compute-0 python3.9[1494]: ansible-ansible.builtin.file Invoked with path=/var/lib/openstack/reboot_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:32:52 compute-0 systemd[1]: session-1.scope: Deactivated successfully.
Oct  9 09:32:52 compute-0 systemd-logind[798]: Session 1 logged out. Waiting for processes to exit.
Oct  9 09:32:52 compute-0 systemd-logind[798]: Removed session 1.
Oct  9 09:32:58 compute-0 systemd-logind[798]: New session 3 of user zuul.
Oct  9 09:32:58 compute-0 systemd[1]: Started Session 3 of User zuul.
Oct  9 09:33:02 compute-0 python3[2260]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:33:04 compute-0 python3[2351]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  9 09:33:05 compute-0 python3[2378]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:33:05 compute-0 python3[2404]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:33:06 compute-0 kernel: loop: module loaded
Oct  9 09:33:06 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Oct  9 09:33:06 compute-0 python3[2439]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:33:06 compute-0 lvm[2442]: PV /dev/loop3 not used.
Oct  9 09:33:06 compute-0 lvm[2451]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:33:06 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct  9 09:33:06 compute-0 lvm[2453]:  1 logical volume(s) in volume group "ceph_vg0" now active
Oct  9 09:33:06 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct  9 09:33:06 compute-0 python3[2531]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 09:33:07 compute-0 python3[2604]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760002386.7540903-33833-140024934150508/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:33:07 compute-0 python3[2654]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:33:07 compute-0 systemd[1]: Reloading.
Oct  9 09:33:07 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:33:07 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:33:07 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct  9 09:33:07 compute-0 bash[2693]: /dev/loop3: [64513]:4194935 (/var/lib/ceph-osd-0.img)
Oct  9 09:33:07 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct  9 09:33:07 compute-0 lvm[2694]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:33:07 compute-0 lvm[2694]: VG ceph_vg0 finished
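The unit file installed at 09:33:07 is not logged (content=NOT_LOGGING_PARAMETER), but its name, the "Starting Ceph OSD losetup..." message, and the losetup listing printed by bash[2693] suggest a oneshot unit that re-attaches the backing file at boot. A minimal sketch, assuming exactly that behavior; the real ceph-osd-losetup.service.j2 template may differ:

    [Unit]
    Description=Ceph OSD losetup
    After=local-fs.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # attach if not already attached; a bare 'losetup /dev/loop3' prints the
    # "/dev/loop3: [64513]:4194935 (/var/lib/ceph-osd-0.img)" listing seen above
    ExecStart=/bin/bash -c 'losetup /dev/loop3 || losetup /dev/loop3 /var/lib/ceph-osd-0.img'

    [Install]
    WantedBy=multi-user.target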
Oct  9 09:33:09 compute-0 python3[2718]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:33:11 compute-0 python3[2811]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  9 09:33:13 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct  9 09:33:13 compute-0 systemd[1]: Started PackageKit Daemon.
Oct  9 09:33:13 compute-0 python3[2876]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  9 09:33:15 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 09:33:15 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct  9 09:33:16 compute-0 python3[2930]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:33:16 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 09:33:16 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct  9 09:33:16 compute-0 systemd[1]: run-r594e7b04a4324fc4912ef4712c52fa11.service: Deactivated successfully.
Oct  9 09:33:16 compute-0 python3[3023]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:33:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1981779606-merged.mount: Deactivated successfully.
Oct  9 09:33:16 compute-0 kernel: evm: overlay not supported
Oct  9 09:33:16 compute-0 podman[3025]: 2025-10-09 09:33:16.83177228 +0000 UTC m=+0.054353022 system refresh
Oct  9 09:33:17 compute-0 python3[3083]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:33:17 compute-0 python3[3109]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:33:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:33:18 compute-0 python3[3187]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 09:33:18 compute-0 python3[3260]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760002398.0327969-34025-23548568653621/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:33:18 compute-0 python3[3362]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 09:33:19 compute-0 python3[3435]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760002398.7977576-34043-233921447660860/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:33:19 compute-0 python3[3485]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:33:19 compute-0 python3[3513]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:33:19 compute-0 python3[3541]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:33:20 compute-0 python3[3569]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
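Decoding the #012 escape (and the stray line-continuation backslash that leaked into the log before --skip-monitoring-stack), the bootstrap call reflows to:

    /usr/sbin/cephadm bootstrap \
        --skip-firewalld \
        --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
        --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
        --ssh-user ceph-admin \
        --allow-fqdn-hostname \
        --output-keyring /etc/ceph/ceph.client.admin.keyring \
        --output-config /etc/ceph/ceph.conf \
        --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 \
        --config /home/ceph-admin/assimilate_ceph.conf \
        --skip-monitoring-stack \
        --skip-dashboard \
        --mon-ip 192.168.122.100

Everything from here to the mon start at 09:33:39 is this one command at work: the ceph-admin session is cephadm's SSH check, and the short-lived podman containers below are cephadm probing the image and generating the initial keyrings and monmap.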
Oct  9 09:33:20 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct  9 09:33:20 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  9 09:33:20 compute-0 systemd-logind[798]: New session 4 of user ceph-admin.
Oct  9 09:33:20 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  9 09:33:20 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct  9 09:33:20 compute-0 systemd[3577]: Queued start job for default target Main User Target.
Oct  9 09:33:20 compute-0 rsyslogd[1243]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:33:20 compute-0 systemd[3577]: Created slice User Application Slice.
Oct  9 09:33:20 compute-0 systemd[3577]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 09:33:20 compute-0 systemd[3577]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 09:33:20 compute-0 systemd[3577]: Reached target Paths.
Oct  9 09:33:20 compute-0 systemd[3577]: Reached target Timers.
Oct  9 09:33:20 compute-0 systemd[3577]: Starting D-Bus User Message Bus Socket...
Oct  9 09:33:20 compute-0 systemd[3577]: Starting Create User's Volatile Files and Directories...
Oct  9 09:33:20 compute-0 systemd[3577]: Listening on D-Bus User Message Bus Socket.
Oct  9 09:33:20 compute-0 systemd[3577]: Reached target Sockets.
Oct  9 09:33:20 compute-0 systemd[3577]: Finished Create User's Volatile Files and Directories.
Oct  9 09:33:20 compute-0 systemd[3577]: Reached target Basic System.
Oct  9 09:33:20 compute-0 systemd[3577]: Reached target Main User Target.
Oct  9 09:33:20 compute-0 systemd[3577]: Startup finished in 85ms.
Oct  9 09:33:20 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct  9 09:33:20 compute-0 systemd[1]: Started Session 4 of User ceph-admin.
Oct  9 09:33:20 compute-0 systemd[1]: session-4.scope: Deactivated successfully.
Oct  9 09:33:20 compute-0 systemd-logind[798]: Session 4 logged out. Waiting for processes to exit.
Oct  9 09:33:20 compute-0 systemd-logind[798]: Removed session 4.
Oct  9 09:33:20 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:33:20 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:33:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1504427698-lower\x2dmapped.mount: Deactivated successfully.
Oct  9 09:33:30 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Oct  9 09:33:30 compute-0 systemd[3577]: Activating special unit Exit the Session...
Oct  9 09:33:30 compute-0 systemd[3577]: Stopped target Main User Target.
Oct  9 09:33:30 compute-0 systemd[3577]: Stopped target Basic System.
Oct  9 09:33:30 compute-0 systemd[3577]: Stopped target Paths.
Oct  9 09:33:30 compute-0 systemd[3577]: Stopped target Sockets.
Oct  9 09:33:30 compute-0 systemd[3577]: Stopped target Timers.
Oct  9 09:33:30 compute-0 systemd[3577]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  9 09:33:30 compute-0 systemd[3577]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  9 09:33:30 compute-0 systemd[3577]: Closed D-Bus User Message Bus Socket.
Oct  9 09:33:30 compute-0 systemd[3577]: Stopped Create User's Volatile Files and Directories.
Oct  9 09:33:30 compute-0 systemd[3577]: Removed slice User Application Slice.
Oct  9 09:33:30 compute-0 systemd[3577]: Reached target Shutdown.
Oct  9 09:33:30 compute-0 systemd[3577]: Finished Exit the Session.
Oct  9 09:33:30 compute-0 systemd[3577]: Reached target Exit the Session.
Oct  9 09:33:30 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Oct  9 09:33:30 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Oct  9 09:33:30 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct  9 09:33:30 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct  9 09:33:30 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct  9 09:33:30 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct  9 09:33:30 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Oct  9 09:33:37 compute-0 podman[3666]: 2025-10-09 09:33:37.44620919 +0000 UTC m=+16.812880636 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:37 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:33:37 compute-0 podman[3716]: 2025-10-09 09:33:37.48880857 +0000 UTC m=+0.026435895 container create c76bc26ab5dd3f682ca0adaa167f9e6890a71c93a980c2ec86757e2f8f1d0acc (image=quay.io/ceph/ceph:v19, name=jovial_benz, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  9 09:33:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck1618356046-merged.mount: Deactivated successfully.
Oct  9 09:33:37 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct  9 09:33:37 compute-0 systemd[1]: Started libpod-conmon-c76bc26ab5dd3f682ca0adaa167f9e6890a71c93a980c2ec86757e2f8f1d0acc.scope.
Oct  9 09:33:37 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:37 compute-0 podman[3716]: 2025-10-09 09:33:37.548675337 +0000 UTC m=+0.086302662 container init c76bc26ab5dd3f682ca0adaa167f9e6890a71c93a980c2ec86757e2f8f1d0acc (image=quay.io/ceph/ceph:v19, name=jovial_benz, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct  9 09:33:37 compute-0 podman[3716]: 2025-10-09 09:33:37.552998478 +0000 UTC m=+0.090625803 container start c76bc26ab5dd3f682ca0adaa167f9e6890a71c93a980c2ec86757e2f8f1d0acc (image=quay.io/ceph/ceph:v19, name=jovial_benz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 09:33:37 compute-0 podman[3716]: 2025-10-09 09:33:37.554205094 +0000 UTC m=+0.091832418 container attach c76bc26ab5dd3f682ca0adaa167f9e6890a71c93a980c2ec86757e2f8f1d0acc (image=quay.io/ceph/ceph:v19, name=jovial_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 09:33:37 compute-0 podman[3716]: 2025-10-09 09:33:37.477986374 +0000 UTC m=+0.015613720 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:37 compute-0 jovial_benz[3729]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Oct  9 09:33:37 compute-0 systemd[1]: libpod-c76bc26ab5dd3f682ca0adaa167f9e6890a71c93a980c2ec86757e2f8f1d0acc.scope: Deactivated successfully.
Oct  9 09:33:37 compute-0 podman[3716]: 2025-10-09 09:33:37.633740845 +0000 UTC m=+0.171368181 container died c76bc26ab5dd3f682ca0adaa167f9e6890a71c93a980c2ec86757e2f8f1d0acc (image=quay.io/ceph/ceph:v19, name=jovial_benz, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 09:33:37 compute-0 podman[3716]: 2025-10-09 09:33:37.650524021 +0000 UTC m=+0.188151345 container remove c76bc26ab5dd3f682ca0adaa167f9e6890a71c93a980c2ec86757e2f8f1d0acc (image=quay.io/ceph/ceph:v19, name=jovial_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  9 09:33:37 compute-0 systemd[1]: libpod-conmon-c76bc26ab5dd3f682ca0adaa167f9e6890a71c93a980c2ec86757e2f8f1d0acc.scope: Deactivated successfully.
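That first create/start/died/remove cycle (jovial_benz) is a version probe: the container's only output is the Ceph version string. cephadm's exact podman arguments are not logged, but functionally it is equivalent to something like:

    podman run --rm quay.io/ceph/ceph:v19 ceph --version
    # ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)

The --rm matches the immediate container removal seen above.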
Oct  9 09:33:37 compute-0 podman[3743]: 2025-10-09 09:33:37.691288301 +0000 UTC m=+0.025795848 container create a28972daa1179a9df9ed11963dff5e139950bdca1ebc818dfe8607a41cfffdd8 (image=quay.io/ceph/ceph:v19, name=crazy_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:33:37 compute-0 systemd[1]: Started libpod-conmon-a28972daa1179a9df9ed11963dff5e139950bdca1ebc818dfe8607a41cfffdd8.scope.
Oct  9 09:33:37 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:37 compute-0 podman[3743]: 2025-10-09 09:33:37.729475683 +0000 UTC m=+0.063983240 container init a28972daa1179a9df9ed11963dff5e139950bdca1ebc818dfe8607a41cfffdd8 (image=quay.io/ceph/ceph:v19, name=crazy_zhukovsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  9 09:33:37 compute-0 podman[3743]: 2025-10-09 09:33:37.733660382 +0000 UTC m=+0.068167930 container start a28972daa1179a9df9ed11963dff5e139950bdca1ebc818dfe8607a41cfffdd8 (image=quay.io/ceph/ceph:v19, name=crazy_zhukovsky, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  9 09:33:37 compute-0 podman[3743]: 2025-10-09 09:33:37.734864102 +0000 UTC m=+0.069371649 container attach a28972daa1179a9df9ed11963dff5e139950bdca1ebc818dfe8607a41cfffdd8 (image=quay.io/ceph/ceph:v19, name=crazy_zhukovsky, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:33:37 compute-0 crazy_zhukovsky[3758]: 167 167
Oct  9 09:33:37 compute-0 systemd[1]: libpod-a28972daa1179a9df9ed11963dff5e139950bdca1ebc818dfe8607a41cfffdd8.scope: Deactivated successfully.
Oct  9 09:33:37 compute-0 podman[3743]: 2025-10-09 09:33:37.736583554 +0000 UTC m=+0.071091111 container died a28972daa1179a9df9ed11963dff5e139950bdca1ebc818dfe8607a41cfffdd8 (image=quay.io/ceph/ceph:v19, name=crazy_zhukovsky, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:33:37 compute-0 podman[3743]: 2025-10-09 09:33:37.750552041 +0000 UTC m=+0.085059589 container remove a28972daa1179a9df9ed11963dff5e139950bdca1ebc818dfe8607a41cfffdd8 (image=quay.io/ceph/ceph:v19, name=crazy_zhukovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Oct  9 09:33:37 compute-0 podman[3743]: 2025-10-09 09:33:37.681522737 +0000 UTC m=+0.016030305 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:37 compute-0 systemd[1]: libpod-conmon-a28972daa1179a9df9ed11963dff5e139950bdca1ebc818dfe8607a41cfffdd8.scope: Deactivated successfully.
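The second probe (crazy_zhukovsky) prints "167 167": the uid and gid of the ceph user inside the image, which cephadm needs in order to chown the host directories it creates. The exact command is not logged; a plausible equivalent is:

    # hypothetical: query the owner of the ceph state directory baked into the image
    podman run --rm quay.io/ceph/ceph:v19 stat -c '%u %g' /var/lib/ceph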
Oct  9 09:33:37 compute-0 podman[3772]: 2025-10-09 09:33:37.789203408 +0000 UTC m=+0.024970263 container create b65e024adc7c603a03e0cfa0a605b2b63f7f57db57171302d4b27b4ce0ac8cf5 (image=quay.io/ceph/ceph:v19, name=exciting_curran, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:33:37 compute-0 systemd[1]: Started libpod-conmon-b65e024adc7c603a03e0cfa0a605b2b63f7f57db57171302d4b27b4ce0ac8cf5.scope.
Oct  9 09:33:37 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:37 compute-0 podman[3772]: 2025-10-09 09:33:37.826831745 +0000 UTC m=+0.062598630 container init b65e024adc7c603a03e0cfa0a605b2b63f7f57db57171302d4b27b4ce0ac8cf5 (image=quay.io/ceph/ceph:v19, name=exciting_curran, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 09:33:37 compute-0 podman[3772]: 2025-10-09 09:33:37.83066861 +0000 UTC m=+0.066435475 container start b65e024adc7c603a03e0cfa0a605b2b63f7f57db57171302d4b27b4ce0ac8cf5 (image=quay.io/ceph/ceph:v19, name=exciting_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Oct  9 09:33:37 compute-0 podman[3772]: 2025-10-09 09:33:37.831823016 +0000 UTC m=+0.067589881 container attach b65e024adc7c603a03e0cfa0a605b2b63f7f57db57171302d4b27b4ce0ac8cf5 (image=quay.io/ceph/ceph:v19, name=exciting_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:33:37 compute-0 exciting_curran[3785]: AQBxgedoqi5jMhAA3GqjMGh9OJ2EhXsD+CAEOw==
Oct  9 09:33:37 compute-0 systemd[1]: libpod-b65e024adc7c603a03e0cfa0a605b2b63f7f57db57171302d4b27b4ce0ac8cf5.scope: Deactivated successfully.
Oct  9 09:33:37 compute-0 podman[3772]: 2025-10-09 09:33:37.84768322 +0000 UTC m=+0.083450086 container died b65e024adc7c603a03e0cfa0a605b2b63f7f57db57171302d4b27b4ce0ac8cf5 (image=quay.io/ceph/ceph:v19, name=exciting_curran, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  9 09:33:37 compute-0 podman[3772]: 2025-10-09 09:33:37.86175404 +0000 UTC m=+0.097520905 container remove b65e024adc7c603a03e0cfa0a605b2b63f7f57db57171302d4b27b4ce0ac8cf5 (image=quay.io/ceph/ceph:v19, name=exciting_curran, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  9 09:33:37 compute-0 podman[3772]: 2025-10-09 09:33:37.779021369 +0000 UTC m=+0.014788255 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:37 compute-0 systemd[1]: libpod-conmon-b65e024adc7c603a03e0cfa0a605b2b63f7f57db57171302d4b27b4ce0ac8cf5.scope: Deactivated successfully.
Oct  9 09:33:37 compute-0 podman[3799]: 2025-10-09 09:33:37.901863957 +0000 UTC m=+0.024674254 container create fc04cb0fd280ac3310868175b95644bea52d9d7b17f1722ec5f7523a8df3e35c (image=quay.io/ceph/ceph:v19, name=hopeful_mcnulty, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:33:37 compute-0 systemd[1]: Started libpod-conmon-fc04cb0fd280ac3310868175b95644bea52d9d7b17f1722ec5f7523a8df3e35c.scope.
Oct  9 09:33:37 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:37 compute-0 podman[3799]: 2025-10-09 09:33:37.942832172 +0000 UTC m=+0.065642489 container init fc04cb0fd280ac3310868175b95644bea52d9d7b17f1722ec5f7523a8df3e35c (image=quay.io/ceph/ceph:v19, name=hopeful_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct  9 09:33:37 compute-0 podman[3799]: 2025-10-09 09:33:37.946864113 +0000 UTC m=+0.069674412 container start fc04cb0fd280ac3310868175b95644bea52d9d7b17f1722ec5f7523a8df3e35c (image=quay.io/ceph/ceph:v19, name=hopeful_mcnulty, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:33:37 compute-0 podman[3799]: 2025-10-09 09:33:37.947896441 +0000 UTC m=+0.070706739 container attach fc04cb0fd280ac3310868175b95644bea52d9d7b17f1722ec5f7523a8df3e35c (image=quay.io/ceph/ceph:v19, name=hopeful_mcnulty, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:33:37 compute-0 hopeful_mcnulty[3816]: AQBxgedoDg1SORAAI2hMijycCW7fcFHgM6wnSQ==
Oct  9 09:33:37 compute-0 systemd[1]: libpod-fc04cb0fd280ac3310868175b95644bea52d9d7b17f1722ec5f7523a8df3e35c.scope: Deactivated successfully.
Oct  9 09:33:37 compute-0 podman[3799]: 2025-10-09 09:33:37.964333482 +0000 UTC m=+0.087143790 container died fc04cb0fd280ac3310868175b95644bea52d9d7b17f1722ec5f7523a8df3e35c (image=quay.io/ceph/ceph:v19, name=hopeful_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  9 09:33:37 compute-0 podman[3799]: 2025-10-09 09:33:37.978739173 +0000 UTC m=+0.101549471 container remove fc04cb0fd280ac3310868175b95644bea52d9d7b17f1722ec5f7523a8df3e35c (image=quay.io/ceph/ceph:v19, name=hopeful_mcnulty, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  9 09:33:37 compute-0 podman[3799]: 2025-10-09 09:33:37.89195303 +0000 UTC m=+0.014763348 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:37 compute-0 systemd[1]: libpod-conmon-fc04cb0fd280ac3310868175b95644bea52d9d7b17f1722ec5f7523a8df3e35c.scope: Deactivated successfully.
Oct  9 09:33:38 compute-0 podman[3830]: 2025-10-09 09:33:38.018327236 +0000 UTC m=+0.025925664 container create b0f27e49bc7e1693036b78a1847fb7b752054f33be387144f05e53d45ee82b26 (image=quay.io/ceph/ceph:v19, name=frosty_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:33:38 compute-0 systemd[1]: Started libpod-conmon-b0f27e49bc7e1693036b78a1847fb7b752054f33be387144f05e53d45ee82b26.scope.
Oct  9 09:33:38 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:38 compute-0 podman[3830]: 2025-10-09 09:33:38.057690174 +0000 UTC m=+0.065288612 container init b0f27e49bc7e1693036b78a1847fb7b752054f33be387144f05e53d45ee82b26 (image=quay.io/ceph/ceph:v19, name=frosty_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default)
Oct  9 09:33:38 compute-0 podman[3830]: 2025-10-09 09:33:38.061439493 +0000 UTC m=+0.069037921 container start b0f27e49bc7e1693036b78a1847fb7b752054f33be387144f05e53d45ee82b26 (image=quay.io/ceph/ceph:v19, name=frosty_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:33:38 compute-0 podman[3830]: 2025-10-09 09:33:38.062555037 +0000 UTC m=+0.070153485 container attach b0f27e49bc7e1693036b78a1847fb7b752054f33be387144f05e53d45ee82b26 (image=quay.io/ceph/ceph:v19, name=frosty_ardinghelli, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Oct  9 09:33:38 compute-0 frosty_ardinghelli[3846]: AQBygedo6wyLBBAAaVIy95WTA1+fypkMmxbBTg==
Oct  9 09:33:38 compute-0 systemd[1]: libpod-b0f27e49bc7e1693036b78a1847fb7b752054f33be387144f05e53d45ee82b26.scope: Deactivated successfully.
Oct  9 09:33:38 compute-0 podman[3830]: 2025-10-09 09:33:38.078249889 +0000 UTC m=+0.085848317 container died b0f27e49bc7e1693036b78a1847fb7b752054f33be387144f05e53d45ee82b26 (image=quay.io/ceph/ceph:v19, name=frosty_ardinghelli, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:33:38 compute-0 podman[3830]: 2025-10-09 09:33:38.094204982 +0000 UTC m=+0.101803410 container remove b0f27e49bc7e1693036b78a1847fb7b752054f33be387144f05e53d45ee82b26 (image=quay.io/ceph/ceph:v19, name=frosty_ardinghelli, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 09:33:38 compute-0 podman[3830]: 2025-10-09 09:33:38.008734919 +0000 UTC m=+0.016333367 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:38 compute-0 systemd[1]: libpod-conmon-b0f27e49bc7e1693036b78a1847fb7b752054f33be387144f05e53d45ee82b26.scope: Deactivated successfully.
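The next three one-shot containers (exciting_curran, hopeful_mcnulty, frosty_ardinghelli) each emit a single base64 AQB... line: freshly generated cephx secrets, presumably for the mon, admin, and bootstrap keyrings. ceph-authtool produces output of exactly this shape:

    podman run --rm quay.io/ceph/ceph:v19 ceph-authtool --gen-print-key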
Oct  9 09:33:38 compute-0 podman[3862]: 2025-10-09 09:33:38.13272494 +0000 UTC m=+0.024349972 container create f3e5da0b1771365d25542d445eeaec801410e691c660564032acaf556684c01c (image=quay.io/ceph/ceph:v19, name=angry_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  9 09:33:38 compute-0 systemd[1]: Started libpod-conmon-f3e5da0b1771365d25542d445eeaec801410e691c660564032acaf556684c01c.scope.
Oct  9 09:33:38 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b586ef6912eb8bd5425a97b5b7d2ac06e609153a138d2fbf83d63f765c3be2c5/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:38 compute-0 podman[3862]: 2025-10-09 09:33:38.178343172 +0000 UTC m=+0.069968214 container init f3e5da0b1771365d25542d445eeaec801410e691c660564032acaf556684c01c (image=quay.io/ceph/ceph:v19, name=angry_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:33:38 compute-0 podman[3862]: 2025-10-09 09:33:38.181904817 +0000 UTC m=+0.073529851 container start f3e5da0b1771365d25542d445eeaec801410e691c660564032acaf556684c01c (image=quay.io/ceph/ceph:v19, name=angry_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  9 09:33:38 compute-0 podman[3862]: 2025-10-09 09:33:38.182983382 +0000 UTC m=+0.074608414 container attach f3e5da0b1771365d25542d445eeaec801410e691c660564032acaf556684c01c (image=quay.io/ceph/ceph:v19, name=angry_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:33:38 compute-0 angry_wilbur[3877]: /usr/bin/monmaptool: monmap file /tmp/monmap
Oct  9 09:33:38 compute-0 angry_wilbur[3877]: setting min_mon_release = quincy
Oct  9 09:33:38 compute-0 angry_wilbur[3877]: /usr/bin/monmaptool: set fsid to 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct  9 09:33:38 compute-0 angry_wilbur[3877]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
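angry_wilbur builds the initial monmap. The flags are not logged, but monmaptool prints this sequence for an invocation along these lines (the address vector and min-mon-release flag are inferred from the output above and the --mon-ip passed to bootstrap):

    monmaptool --create --clobber \
        --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 \
        --set-min-mon-release quincy \
        --addv compute-0 '[v2:192.168.122.100:3300,v1:192.168.122.100:6789]' \
        /tmp/monmap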
Oct  9 09:33:38 compute-0 systemd[1]: libpod-f3e5da0b1771365d25542d445eeaec801410e691c660564032acaf556684c01c.scope: Deactivated successfully.
Oct  9 09:33:38 compute-0 podman[3862]: 2025-10-09 09:33:38.122933278 +0000 UTC m=+0.014558330 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:38 compute-0 podman[3884]: 2025-10-09 09:33:38.22753879 +0000 UTC m=+0.015954573 container died f3e5da0b1771365d25542d445eeaec801410e691c660564032acaf556684c01c (image=quay.io/ceph/ceph:v19, name=angry_wilbur, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 09:33:38 compute-0 podman[3884]: 2025-10-09 09:33:38.241803876 +0000 UTC m=+0.030219659 container remove f3e5da0b1771365d25542d445eeaec801410e691c660564032acaf556684c01c (image=quay.io/ceph/ceph:v19, name=angry_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct  9 09:33:38 compute-0 systemd[1]: libpod-conmon-f3e5da0b1771365d25542d445eeaec801410e691c660564032acaf556684c01c.scope: Deactivated successfully.
Oct  9 09:33:38 compute-0 podman[3896]: 2025-10-09 09:33:38.287525764 +0000 UTC m=+0.026457196 container create 762050b2ea75bcd67240c602156d52934fae100b3f280f64a9a3cb25a61facda (image=quay.io/ceph/ceph:v19, name=quizzical_turing, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:33:38 compute-0 systemd[1]: Started libpod-conmon-762050b2ea75bcd67240c602156d52934fae100b3f280f64a9a3cb25a61facda.scope.
Oct  9 09:33:38 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520dd86392ea640741585b172795be9302baa22d07ce1a960389e04efe35cbc6/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520dd86392ea640741585b172795be9302baa22d07ce1a960389e04efe35cbc6/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520dd86392ea640741585b172795be9302baa22d07ce1a960389e04efe35cbc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520dd86392ea640741585b172795be9302baa22d07ce1a960389e04efe35cbc6/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:38 compute-0 podman[3896]: 2025-10-09 09:33:38.328622692 +0000 UTC m=+0.067554113 container init 762050b2ea75bcd67240c602156d52934fae100b3f280f64a9a3cb25a61facda (image=quay.io/ceph/ceph:v19, name=quizzical_turing, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:33:38 compute-0 podman[3896]: 2025-10-09 09:33:38.332216587 +0000 UTC m=+0.071148009 container start 762050b2ea75bcd67240c602156d52934fae100b3f280f64a9a3cb25a61facda (image=quay.io/ceph/ceph:v19, name=quizzical_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:33:38 compute-0 podman[3896]: 2025-10-09 09:33:38.33323037 +0000 UTC m=+0.072161790 container attach 762050b2ea75bcd67240c602156d52934fae100b3f280f64a9a3cb25a61facda (image=quay.io/ceph/ceph:v19, name=quizzical_turing, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:33:38 compute-0 systemd[1]: libpod-762050b2ea75bcd67240c602156d52934fae100b3f280f64a9a3cb25a61facda.scope: Deactivated successfully.
Oct  9 09:33:38 compute-0 podman[3896]: 2025-10-09 09:33:38.371760226 +0000 UTC m=+0.110691647 container died 762050b2ea75bcd67240c602156d52934fae100b3f280f64a9a3cb25a61facda (image=quay.io/ceph/ceph:v19, name=quizzical_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 09:33:38 compute-0 podman[3896]: 2025-10-09 09:33:38.276811091 +0000 UTC m=+0.015742532 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:38 compute-0 podman[3896]: 2025-10-09 09:33:38.386960626 +0000 UTC m=+0.125892047 container remove 762050b2ea75bcd67240c602156d52934fae100b3f280f64a9a3cb25a61facda (image=quay.io/ceph/ceph:v19, name=quizzical_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 09:33:38 compute-0 systemd[1]: libpod-conmon-762050b2ea75bcd67240c602156d52934fae100b3f280f64a9a3cb25a61facda.scope: Deactivated successfully.
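quizzical_turing initializes the mon data directory. The bind mounts visible in the xfs remount lines (/tmp/monmap, /tmp/keyring, /var/lib/ceph/mon/ceph-compute-0) match a mon mkfs step of roughly this shape, run inside the container; the exact arguments are not logged:

    ceph-mon --mkfs -i compute-0 \
        --monmap /tmp/monmap \
        --keyring /tmp/keyring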
Oct  9 09:33:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1143b6112622e4cc3afcce2f383e9284074205f4c29acf30d8a3c281d8f7ef02-merged.mount: Deactivated successfully.
Oct  9 09:33:38 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:33:38 compute-0 systemd[1]: Reloading.
Oct  9 09:33:38 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:33:38 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:33:38 compute-0 systemd[1]: Reloading.
Oct  9 09:33:38 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:33:38 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:33:38 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Oct  9 09:33:38 compute-0 systemd[1]: Reloading.
Oct  9 09:33:38 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:33:38 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:33:38 compute-0 systemd[1]: Reached target Ceph cluster 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:33:39 compute-0 systemd[1]: Reloading.
Oct  9 09:33:39 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:33:39 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:33:39 compute-0 systemd[1]: Reloading.
Oct  9 09:33:39 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:33:39 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
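
The systemd-rc-local-generator message repeated through these reloads is informational: rc.local is skipped solely because it lacks the execute bit. If rc.local were actually wanted on this host, a sketch of the fix (assuming the stock /etc/rc.d/rc.local path) would be:

    import os, stat, subprocess

    path = "/etc/rc.d/rc.local"
    mode = os.stat(path).st_mode
    # Add the execute bits the generator checks for, then re-run generators.
    os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    subprocess.run(["systemctl", "daemon-reload"], check=True)
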
Oct  9 09:33:39 compute-0 systemd[1]: Created slice Slice /system/ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:33:39 compute-0 systemd[1]: Reached target System Time Set.
Oct  9 09:33:39 compute-0 systemd[1]: Reached target System Time Synchronized.
Oct  9 09:33:39 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:33:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:33:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:33:39 compute-0 podman[4177]: 2025-10-09 09:33:39.612393645 +0000 UTC m=+0.028133065 container create 63a15ed9f3324cb9f1b7e7e825513995921bcfe4f0ca2788677b2f019ebec561 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:33:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92e83b21269cec4a0f3de850701a086a4c0b7722823dff469becf841c831dc9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92e83b21269cec4a0f3de850701a086a4c0b7722823dff469becf841c831dc9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92e83b21269cec4a0f3de850701a086a4c0b7722823dff469becf841c831dc9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92e83b21269cec4a0f3de850701a086a4c0b7722823dff469becf841c831dc9c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:39 compute-0 podman[4177]: 2025-10-09 09:33:39.651429105 +0000 UTC m=+0.067168546 container init 63a15ed9f3324cb9f1b7e7e825513995921bcfe4f0ca2788677b2f019ebec561 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:33:39 compute-0 podman[4177]: 2025-10-09 09:33:39.656328793 +0000 UTC m=+0.072068215 container start 63a15ed9f3324cb9f1b7e7e825513995921bcfe4f0ca2788677b2f019ebec561 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 09:33:39 compute-0 bash[4177]: 63a15ed9f3324cb9f1b7e7e825513995921bcfe4f0ca2788677b2f019ebec561
Oct  9 09:33:39 compute-0 podman[4177]: 2025-10-09 09:33:39.600729342 +0000 UTC m=+0.016468783 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:39 compute-0 systemd[1]: Started Ceph mon.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
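
The unit started here wraps the mon container. Its name is not printed in the log, but cephadm-managed daemons conventionally run as ceph-<fsid>@<daemon>.<host>.service, so a quick liveness check might look like this sketch (the unit name is inferred from that convention, not read from the log):

    import subprocess

    fsid = "286f8bf0-da72-5823-9a4e-ac4457d9e609"   # from the log above
    unit = f"ceph-{fsid}@mon.compute-0.service"     # assumed naming pattern
    state = subprocess.run(["systemctl", "is-active", unit],
                           capture_output=True, text=True).stdout.strip()
    print(unit, "->", state)  # expect "active" after the Started message above
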
Oct  9 09:33:39 compute-0 ceph-mon[4193]: set uid:gid to 167:167 (ceph:ceph)
Oct  9 09:33:39 compute-0 ceph-mon[4193]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Oct  9 09:33:39 compute-0 ceph-mon[4193]: pidfile_write: ignore empty --pid-file
Oct  9 09:33:39 compute-0 ceph-mon[4193]: load: jerasure load: lrc 
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: RocksDB version: 7.9.2
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Git sha 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Compile date 2025-07-17 03:12:14
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: DB SUMMARY
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: DB Session ID:  IRGZWZ8L1C6S4YZSD7XL
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: CURRENT file:  CURRENT
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: IDENTITY file:  IDENTITY
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                         Options.error_if_exists: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                       Options.create_if_missing: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                         Options.paranoid_checks: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                                     Options.env: 0x55ce31509c20
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                                      Options.fs: PosixFileSystem
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                                Options.info_log: 0x55ce32811940
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                Options.max_file_opening_threads: 16
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                              Options.statistics: (nil)
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                               Options.use_fsync: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                       Options.max_log_file_size: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                         Options.allow_fallocate: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                        Options.use_direct_reads: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:          Options.create_missing_column_families: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                              Options.db_log_dir: 
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                                 Options.wal_dir: 
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                   Options.advise_random_on_open: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                    Options.write_buffer_manager: 0x55ce32815900
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                            Options.rate_limiter: (nil)
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                  Options.unordered_write: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                               Options.row_cache: None
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                              Options.wal_filter: None
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:             Options.allow_ingest_behind: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:             Options.two_write_queues: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:             Options.manual_wal_flush: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:             Options.wal_compression: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:             Options.atomic_flush: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                 Options.log_readahead_size: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:             Options.allow_data_in_errors: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:             Options.db_host_id: __hostname__
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:             Options.max_background_jobs: 2
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:             Options.max_background_compactions: -1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:             Options.max_subcompactions: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:             Options.max_total_wal_size: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                          Options.max_open_files: -1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                          Options.bytes_per_sync: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:       Options.compaction_readahead_size: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                  Options.max_background_flushes: -1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Compression algorithms supported:
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:     kZSTD supported: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:     kXpressCompression supported: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:     kBZip2Compression supported: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:     kZSTDNotFinalCompression supported: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:     kLZ4Compression supported: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:     kZlibCompression supported: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:     kLZ4HCCompression supported: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:     kSnappyCompression supported: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:           Options.merge_operator: 
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:        Options.compaction_filter: None
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ce328115e0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55ce328349b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:        Options.write_buffer_size: 33554432
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:  Options.max_write_buffer_number: 2
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:          Options.compression: NoCompression
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:             Options.num_levels: 7
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ba1e7fee-fdf5-47b8-8729-cc5ad901148d
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002419687539, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002419690019, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002419, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "IRGZWZ8L1C6S4YZSD7XL", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002419690116, "job": 1, "event": "recovery_finished"}
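
RocksDB's EVENT_LOG_v1 lines embed well-formed JSON after the marker, so the recovery and table-creation events above can be pulled out mechanically. A minimal sketch, again assuming the log sits in a file named "messages":

    import json

    def rocksdb_events(path="messages"):
        marker = "EVENT_LOG_v1 "
        with open(path) as fh:
            for line in fh:
                idx = line.find(marker)
                if idx != -1:
                    # Everything after the marker is one JSON object.
                    yield json.loads(line[idx + len(marker):])

    for ev in rocksdb_events():
        print(ev["event"], ev.get("time_micros"))
    # -> recovery_started / table_file_creation / recovery_finished, as above
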
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55ce32836e00
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: DB pointer 0x55ce32846000
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 09:33:39 compute-0 ceph-mon[4193]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.22 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.22 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55ce328349b0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct  9 09:33:39 compute-0 ceph-mon[4193]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@-1(???) e0 preinit fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(probing) e0 win_standalone_election
Oct  9 09:33:39 compute-0 ceph-mon[4193]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 09:33:39 compute-0 ceph-mon[4193]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(probing) e1 win_standalone_election
Oct  9 09:33:39 compute-0 ceph-mon[4193]: paxos.0).electionLogic(2) init, last seen epoch 2
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 09:33:39 compute-0 ceph-mon[4193]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  9 09:33:39 compute-0 ceph-mon[4193]: log_channel(cluster) log [DBG] : monmap epoch 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: log_channel(cluster) log [DBG] : fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct  9 09:33:39 compute-0 ceph-mon[4193]: log_channel(cluster) log [DBG] : last_changed 2025-10-09T09:33:38.201593+0000
Oct  9 09:33:39 compute-0 ceph-mon[4193]: log_channel(cluster) log [DBG] : created 2025-10-09T09:33:38.201593+0000
Oct  9 09:33:39 compute-0 ceph-mon[4193]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct  9 09:33:39 compute-0 ceph-mon[4193]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC 7763 64-Core Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:04:00.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865152,os=Linux}
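
The metadata blob above is a flat {key=value,key=value,...} rendering in which values may themselves contain commas (compression_algorithms, ceph_version). Splitting only at commas that begin a new key recovers the fields; a sketch:

    import re

    def parse_daemon_metadata(blob):
        # Split "{k=v,k=v,...}" at commas followed by a new "key=" token,
        # so commas inside values survive intact.
        body = blob.strip().lstrip("{").rstrip("}")
        fields = re.split(r",(?=[a-z_][a-z0-9_]*=)", body)
        return dict(f.split("=", 1) for f in fields)

    meta = parse_daemon_metadata("{arch=x86_64,ceph_release=squid,"
                                 "compression_algorithms=none, snappy, zlib, zstd, lz4,"
                                 "distro=centos}")
    print(meta["compression_algorithms"])  # -> "none, snappy, zlib, zstd, lz4"
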
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader).mds e1 new map
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader).mds e1 print_map
e1
btime 2025-10-09T09:33:39:705322+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: -1

No filesystems configured
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct  9 09:33:39 compute-0 ceph-mon[4193]: log_channel(cluster) log [DBG] : fsmap 
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mkfs 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Oct  9 09:33:39 compute-0 podman[4194]: 2025-10-09 09:33:39.709392464 +0000 UTC m=+0.031752410 container create 1242dbca335ac6e237c0d80e3fd4701a090eb8604983b519c47e97235b22466a (image=quay.io/ceph/ceph:v19, name=zealous_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:33:39 compute-0 ceph-mon[4193]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  9 09:33:39 compute-0 ceph-mon[4193]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct  9 09:33:39 compute-0 systemd[1]: Started libpod-conmon-1242dbca335ac6e237c0d80e3fd4701a090eb8604983b519c47e97235b22466a.scope.
Oct  9 09:33:39 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18500f10e8f4ca0ee039ad391f8b0f01c40db40538d1c1755e188877933881b0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18500f10e8f4ca0ee039ad391f8b0f01c40db40538d1c1755e188877933881b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18500f10e8f4ca0ee039ad391f8b0f01c40db40538d1c1755e188877933881b0/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:39 compute-0 podman[4194]: 2025-10-09 09:33:39.762467927 +0000 UTC m=+0.084827862 container init 1242dbca335ac6e237c0d80e3fd4701a090eb8604983b519c47e97235b22466a (image=quay.io/ceph/ceph:v19, name=zealous_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  9 09:33:39 compute-0 podman[4194]: 2025-10-09 09:33:39.766915152 +0000 UTC m=+0.089275087 container start 1242dbca335ac6e237c0d80e3fd4701a090eb8604983b519c47e97235b22466a (image=quay.io/ceph/ceph:v19, name=zealous_villani, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  9 09:33:39 compute-0 podman[4194]: 2025-10-09 09:33:39.767996821 +0000 UTC m=+0.090356778 container attach 1242dbca335ac6e237c0d80e3fd4701a090eb8604983b519c47e97235b22466a (image=quay.io/ceph/ceph:v19, name=zealous_villani, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 09:33:39 compute-0 podman[4194]: 2025-10-09 09:33:39.69662994 +0000 UTC m=+0.018989886 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:39 compute-0 ceph-mon[4193]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Oct  9 09:33:39 compute-0 ceph-mon[4193]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4277487243' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  9 09:33:39 compute-0 zealous_villani[4245]:  cluster:
Oct  9 09:33:39 compute-0 zealous_villani[4245]:    id:     286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct  9 09:33:39 compute-0 zealous_villani[4245]:    health: HEALTH_OK
Oct  9 09:33:39 compute-0 zealous_villani[4245]: 
Oct  9 09:33:39 compute-0 zealous_villani[4245]:  services:
Oct  9 09:33:39 compute-0 zealous_villani[4245]:    mon: 1 daemons, quorum compute-0 (age 0.205692s)
Oct  9 09:33:39 compute-0 zealous_villani[4245]:    mgr: no daemons active
Oct  9 09:33:39 compute-0 zealous_villani[4245]:    osd: 0 osds: 0 up, 0 in
Oct  9 09:33:39 compute-0 zealous_villani[4245]: 
Oct  9 09:33:39 compute-0 zealous_villani[4245]:  data:
Oct  9 09:33:39 compute-0 zealous_villani[4245]:    pools:   0 pools, 0 pgs
Oct  9 09:33:39 compute-0 zealous_villani[4245]:    objects: 0 objects, 0 B
Oct  9 09:33:39 compute-0 zealous_villani[4245]:    usage:   0 B used, 0 B / 0 B avail
Oct  9 09:33:39 compute-0 zealous_villani[4245]:    pgs:     
Oct  9 09:33:39 compute-0 zealous_villani[4245]: 
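
The status block above comes from a one-shot `ceph -s` helper container. The same information is available as JSON, which is easier to check from scripts; a sketch, assuming a reachable mon and a client keyring on the host (field names per recent Ceph releases):

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "status", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    print(status["health"]["status"])    # "HEALTH_OK" at this point in the log
    print(status["quorum_names"])        # ["compute-0"]
    print(status["osdmap"]["num_osds"])  # 0: no OSDs deployed yet
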
Oct  9 09:33:39 compute-0 systemd[1]: libpod-1242dbca335ac6e237c0d80e3fd4701a090eb8604983b519c47e97235b22466a.scope: Deactivated successfully.
Oct  9 09:33:39 compute-0 conmon[4245]: conmon 1242dbca335ac6e237c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1242dbca335ac6e237c0d80e3fd4701a090eb8604983b519c47e97235b22466a.scope/container/memory.events
Oct  9 09:33:39 compute-0 podman[4194]: 2025-10-09 09:33:39.922090321 +0000 UTC m=+0.244450278 container died 1242dbca335ac6e237c0d80e3fd4701a090eb8604983b519c47e97235b22466a (image=quay.io/ceph/ceph:v19, name=zealous_villani, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:33:39 compute-0 podman[4194]: 2025-10-09 09:33:39.940886571 +0000 UTC m=+0.263246507 container remove 1242dbca335ac6e237c0d80e3fd4701a090eb8604983b519c47e97235b22466a (image=quay.io/ceph/ceph:v19, name=zealous_villani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:33:39 compute-0 systemd[1]: libpod-conmon-1242dbca335ac6e237c0d80e3fd4701a090eb8604983b519c47e97235b22466a.scope: Deactivated successfully.
Oct  9 09:33:39 compute-0 podman[4280]: 2025-10-09 09:33:39.98588275 +0000 UTC m=+0.027904504 container create b4ee90746a2f882d03c77441bb3a70903313305574e46625fe9f1602c8d7ff39 (image=quay.io/ceph/ceph:v19, name=awesome_clarke, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 09:33:40 compute-0 systemd[1]: Started libpod-conmon-b4ee90746a2f882d03c77441bb3a70903313305574e46625fe9f1602c8d7ff39.scope.
Oct  9 09:33:40 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7afbe7fd9e114cfcf7d4abdd4bc61815e43c7a860fda2e97b5d62257bab2ae3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7afbe7fd9e114cfcf7d4abdd4bc61815e43c7a860fda2e97b5d62257bab2ae3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7afbe7fd9e114cfcf7d4abdd4bc61815e43c7a860fda2e97b5d62257bab2ae3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7afbe7fd9e114cfcf7d4abdd4bc61815e43c7a860fda2e97b5d62257bab2ae3/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:40 compute-0 podman[4280]: 2025-10-09 09:33:40.032258261 +0000 UTC m=+0.074280024 container init b4ee90746a2f882d03c77441bb3a70903313305574e46625fe9f1602c8d7ff39 (image=quay.io/ceph/ceph:v19, name=awesome_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:33:40 compute-0 podman[4280]: 2025-10-09 09:33:40.036661403 +0000 UTC m=+0.078683156 container start b4ee90746a2f882d03c77441bb3a70903313305574e46625fe9f1602c8d7ff39 (image=quay.io/ceph/ceph:v19, name=awesome_clarke, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct  9 09:33:40 compute-0 podman[4280]: 2025-10-09 09:33:40.037787575 +0000 UTC m=+0.079809348 container attach b4ee90746a2f882d03c77441bb3a70903313305574e46625fe9f1602c8d7ff39 (image=quay.io/ceph/ceph:v19, name=awesome_clarke, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:33:40 compute-0 podman[4280]: 2025-10-09 09:33:39.97446914 +0000 UTC m=+0.016490913 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:40 compute-0 ceph-mon[4193]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Oct  9 09:33:40 compute-0 ceph-mon[4193]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2880171403' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  9 09:33:40 compute-0 ceph-mon[4193]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2880171403' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  9 09:33:40 compute-0 awesome_clarke[4294]: 
Oct  9 09:33:40 compute-0 awesome_clarke[4294]: [global]
Oct  9 09:33:40 compute-0 awesome_clarke[4294]:     fsid = 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct  9 09:33:40 compute-0 awesome_clarke[4294]:     mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
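
The mon has just handled a `config assimilate-conf` mon_command, and the short-lived container echoes back the options it kept out of the monitor's config database, here only the [global] fsid and mon_host. A minimal sketch of the client-side invocation (the input path is illustrative, not taken from this log):

    # Feed an existing ceph.conf to the cluster; recognised options move
    # into the mon config store, the remainder is printed back as above.
    ceph config assimilate-conf -i /etc/ceph/ceph.conf
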
Oct  9 09:33:40 compute-0 systemd[1]: libpod-b4ee90746a2f882d03c77441bb3a70903313305574e46625fe9f1602c8d7ff39.scope: Deactivated successfully.
Oct  9 09:33:40 compute-0 podman[4280]: 2025-10-09 09:33:40.191367427 +0000 UTC m=+0.233389180 container died b4ee90746a2f882d03c77441bb3a70903313305574e46625fe9f1602c8d7ff39 (image=quay.io/ceph/ceph:v19, name=awesome_clarke, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  9 09:33:40 compute-0 podman[4280]: 2025-10-09 09:33:40.208670742 +0000 UTC m=+0.250692495 container remove b4ee90746a2f882d03c77441bb3a70903313305574e46625fe9f1602c8d7ff39 (image=quay.io/ceph/ceph:v19, name=awesome_clarke, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  9 09:33:40 compute-0 systemd[1]: libpod-conmon-b4ee90746a2f882d03c77441bb3a70903313305574e46625fe9f1602c8d7ff39.scope: Deactivated successfully.
Oct  9 09:33:40 compute-0 podman[4330]: 2025-10-09 09:33:40.250793304 +0000 UTC m=+0.026798269 container create e7d47225939b7db615d5bc65c068775e7aa431bd1433f909e7ef363026bff9ae (image=quay.io/ceph/ceph:v19, name=optimistic_lichterman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  9 09:33:40 compute-0 systemd[1]: Started libpod-conmon-e7d47225939b7db615d5bc65c068775e7aa431bd1433f909e7ef363026bff9ae.scope.
Oct  9 09:33:40 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985888ec6957250349742cd35318acf7b9e67b5394aed6d69fab0bd852e86895/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985888ec6957250349742cd35318acf7b9e67b5394aed6d69fab0bd852e86895/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985888ec6957250349742cd35318acf7b9e67b5394aed6d69fab0bd852e86895/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/985888ec6957250349742cd35318acf7b9e67b5394aed6d69fab0bd852e86895/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:40 compute-0 podman[4330]: 2025-10-09 09:33:40.30847386 +0000 UTC m=+0.084478844 container init e7d47225939b7db615d5bc65c068775e7aa431bd1433f909e7ef363026bff9ae (image=quay.io/ceph/ceph:v19, name=optimistic_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:33:40 compute-0 podman[4330]: 2025-10-09 09:33:40.312078235 +0000 UTC m=+0.088083200 container start e7d47225939b7db615d5bc65c068775e7aa431bd1433f909e7ef363026bff9ae (image=quay.io/ceph/ceph:v19, name=optimistic_lichterman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:33:40 compute-0 podman[4330]: 2025-10-09 09:33:40.313244204 +0000 UTC m=+0.089249168 container attach e7d47225939b7db615d5bc65c068775e7aa431bd1433f909e7ef363026bff9ae (image=quay.io/ceph/ceph:v19, name=optimistic_lichterman, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:33:40 compute-0 podman[4330]: 2025-10-09 09:33:40.240670637 +0000 UTC m=+0.016675623 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:40 compute-0 ceph-mon[4193]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:33:40 compute-0 ceph-mon[4193]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1251320113' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
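
The follow-up `config generate-minimal-conf` asks the mon for the smallest client configuration that can still reach the cluster. A hedged sketch of the equivalent CLI (the output path is illustrative):

    # The generated file normally carries just fsid and mon_host,
    # matching the [global] block echoed earlier in this log.
    ceph config generate-minimal-conf > /etc/ceph/ceph.conf.minimal
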
Oct  9 09:33:40 compute-0 systemd[1]: libpod-e7d47225939b7db615d5bc65c068775e7aa431bd1433f909e7ef363026bff9ae.scope: Deactivated successfully.
Oct  9 09:33:40 compute-0 conmon[4344]: conmon e7d47225939b7db615d5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e7d47225939b7db615d5bc65c068775e7aa431bd1433f909e7ef363026bff9ae.scope/container/memory.events
Oct  9 09:33:40 compute-0 podman[4330]: 2025-10-09 09:33:40.465317885 +0000 UTC m=+0.241322850 container died e7d47225939b7db615d5bc65c068775e7aa431bd1433f909e7ef363026bff9ae (image=quay.io/ceph/ceph:v19, name=optimistic_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  9 09:33:40 compute-0 podman[4330]: 2025-10-09 09:33:40.485040652 +0000 UTC m=+0.261045617 container remove e7d47225939b7db615d5bc65c068775e7aa431bd1433f909e7ef363026bff9ae (image=quay.io/ceph/ceph:v19, name=optimistic_lichterman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:33:40 compute-0 systemd[1]: libpod-conmon-e7d47225939b7db615d5bc65c068775e7aa431bd1433f909e7ef363026bff9ae.scope: Deactivated successfully.
Oct  9 09:33:40 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:33:40 compute-0 ceph-mon[4193]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct  9 09:33:40 compute-0 ceph-mon[4193]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct  9 09:33:40 compute-0 ceph-mon[4193]: mon.compute-0@0(leader) e1 shutdown
Oct  9 09:33:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0[4189]: 2025-10-09T09:33:40.611+0000 7fb247b5f640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct  9 09:33:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0[4189]: 2025-10-09T09:33:40.611+0000 7fb247b5f640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct  9 09:33:40 compute-0 ceph-mon[4193]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  9 09:33:40 compute-0 ceph-mon[4193]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  9 09:33:40 compute-0 podman[4400]: 2025-10-09 09:33:40.663750856 +0000 UTC m=+0.074107158 container died 63a15ed9f3324cb9f1b7e7e825513995921bcfe4f0ca2788677b2f019ebec561 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:33:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-92e83b21269cec4a0f3de850701a086a4c0b7722823dff469becf841c831dc9c-merged.mount: Deactivated successfully.
Oct  9 09:33:40 compute-0 podman[4400]: 2025-10-09 09:33:40.680164104 +0000 UTC m=+0.090520405 container remove 63a15ed9f3324cb9f1b7e7e825513995921bcfe4f0ca2788677b2f019ebec561 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:33:40 compute-0 bash[4400]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0
Oct  9 09:33:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:33:40 compute-0 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@mon.compute-0.service: Deactivated successfully.
Oct  9 09:33:40 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:33:40 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:33:40 compute-0 podman[4481]: 2025-10-09 09:33:40.915891262 +0000 UTC m=+0.026736452 container create fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9edb69e03594bd9584b2553a34f5c0a5a18e3e11de0a957c9b66aef746560149/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9edb69e03594bd9584b2553a34f5c0a5a18e3e11de0a957c9b66aef746560149/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9edb69e03594bd9584b2553a34f5c0a5a18e3e11de0a957c9b66aef746560149/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9edb69e03594bd9584b2553a34f5c0a5a18e3e11de0a957c9b66aef746560149/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:40 compute-0 podman[4481]: 2025-10-09 09:33:40.950102006 +0000 UTC m=+0.060947196 container init fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:33:40 compute-0 podman[4481]: 2025-10-09 09:33:40.955356442 +0000 UTC m=+0.066201622 container start fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  9 09:33:40 compute-0 bash[4481]: fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6
Oct  9 09:33:40 compute-0 podman[4481]: 2025-10-09 09:33:40.905098522 +0000 UTC m=+0.015943713 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:40 compute-0 systemd[1]: Started Ceph mon.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
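
The stop/start pair above is the cephadm-managed systemd unit recreating the mon container; the unit name embeds the cluster fsid and daemon id, so the same restart can be driven by hand, assuming systemd manages the daemon as this journal shows:

    # Unit name copied from the journal lines above.
    systemctl restart ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@mon.compute-0.service
    systemctl status ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@mon.compute-0.service --no-pager
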
Oct  9 09:33:40 compute-0 ceph-mon[4497]: set uid:gid to 167:167 (ceph:ceph)
Oct  9 09:33:40 compute-0 ceph-mon[4497]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Oct  9 09:33:40 compute-0 ceph-mon[4497]: pidfile_write: ignore empty --pid-file
Oct  9 09:33:40 compute-0 ceph-mon[4497]: load: jerasure load: lrc 
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: RocksDB version: 7.9.2
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Git sha 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Compile date 2025-07-17 03:12:14
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: DB SUMMARY
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: DB Session ID:  REEUAVY01GI85Z7KU96K
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: CURRENT file:  CURRENT
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: IDENTITY file:  IDENTITY
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 46813 ; 
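
The DB SUMMARY places the monitor's RocksDB store at /var/lib/ceph/mon/ceph-compute-0/store.db with a single SST file and a write-ahead log. With the mon stopped, that store can be inspected offline; a sketch using ceph-monstore-tool, assuming the tool is installed and the daemon is not running:

    # Dump the current monmap out of the same store.db listed above.
    ceph-monstore-tool /var/lib/ceph/mon/ceph-compute-0 get monmap -- --out /tmp/monmap
    monmaptool --print /tmp/monmap
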
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                         Options.error_if_exists: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                       Options.create_if_missing: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                         Options.paranoid_checks: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                                     Options.env: 0x557b3b843c20
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                                      Options.fs: PosixFileSystem
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                                Options.info_log: 0x557b3d646e20
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                Options.max_file_opening_threads: 16
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                              Options.statistics: (nil)
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                               Options.use_fsync: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                       Options.max_log_file_size: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                         Options.allow_fallocate: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                        Options.use_direct_reads: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:          Options.create_missing_column_families: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                              Options.db_log_dir: 
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                                 Options.wal_dir: 
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                   Options.advise_random_on_open: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                    Options.write_buffer_manager: 0x557b3d64b900
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                            Options.rate_limiter: (nil)
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                  Options.unordered_write: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                               Options.row_cache: None
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                              Options.wal_filter: None
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:             Options.allow_ingest_behind: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:             Options.two_write_queues: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:             Options.manual_wal_flush: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:             Options.wal_compression: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:             Options.atomic_flush: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                 Options.log_readahead_size: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:             Options.allow_data_in_errors: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:             Options.db_host_id: __hostname__
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:             Options.max_background_jobs: 2
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:             Options.max_background_compactions: -1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:             Options.max_subcompactions: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:             Options.max_total_wal_size: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                          Options.max_open_files: -1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                          Options.bytes_per_sync: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:       Options.compaction_readahead_size: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                  Options.max_background_flushes: -1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Compression algorithms supported:
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:     kZSTD supported: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:     kXpressCompression supported: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:     kBZip2Compression supported: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:     kZSTDNotFinalCompression supported: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:     kLZ4Compression supported: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:     kZlibCompression supported: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:     kLZ4HCCompression supported: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:     kSnappyCompression supported: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: DMutex implementation: pthread_mutex_t
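
Everything in the Options dump above comes from RocksDB defaults plus the option string Ceph hands over at open time. The effective string can be read back through the mon's admin socket, assuming the socket is reachable (in a containerized deployment like this one, that usually means entering the mon container first):

    # Show the RocksDB tuning string the mon passes to RocksDB.
    ceph daemon mon.compute-0 config get mon_rocksdb_options
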
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:           Options.merge_operator: 
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:        Options.compaction_filter: None
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557b3d646aa0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557b3d66b350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:        Options.write_buffer_size: 33554432
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:  Options.max_write_buffer_number: 2
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:          Options.compression: NoCompression
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:             Options.num_levels: 7
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ba1e7fee-fdf5-47b8-8729-cc5ad901148d
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002420987548, "job": 1, "event": "recovery_started", "wal_files": [9]}
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002420989328, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 46708, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 117, "table_properties": {"data_size": 45279, "index_size": 135, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2753, "raw_average_key_size": 31, "raw_value_size": 43072, "raw_average_value_size": 489, "num_data_blocks": 7, "num_entries": 88, "num_filter_entries": 88, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002420, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002420989413, "job": 1, "event": "recovery_finished"}
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557b3d66ce00
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: DB pointer 0x557b3d776000
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 09:33:40 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0   47.51 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     28.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Sum      2/0   47.51 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     28.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     28.8      0.00              0.00         1    0.002       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     28.8      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 6.35 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 6.35 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557b3d66b350#2 capacity: 512.00 MB usage: 1.70 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.33 KB,6.25849e-05%) Misc(2,0.95 KB,0.000181794%)

** File Read Latency Histogram By Level [default] **
Oct  9 09:33:40 compute-0 ceph-mon[4497]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct  9 09:33:40 compute-0 ceph-mon[4497]: mon.compute-0@-1(???) e1 preinit fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct  9 09:33:40 compute-0 ceph-mon[4497]: mon.compute-0@-1(???).mds e1 new map
Oct  9 09:33:40 compute-0 ceph-mon[4497]: mon.compute-0@-1(???).mds e1 print_map
e1
btime 2025-10-09T09:33:39:705322+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: -1

No filesystems configured
Oct  9 09:33:40 compute-0 ceph-mon[4497]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct  9 09:33:40 compute-0 ceph-mon[4497]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  9 09:33:40 compute-0 ceph-mon[4497]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  9 09:33:40 compute-0 ceph-mon[4497]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  9 09:33:40 compute-0 ceph-mon[4497]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Oct  9 09:33:40 compute-0 ceph-mon[4497]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Oct  9 09:33:40 compute-0 ceph-mon[4497]: mon.compute-0@0(probing) e1 win_standalone_election
Oct  9 09:33:40 compute-0 ceph-mon[4497]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Oct  9 09:33:40 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 09:33:40 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  9 09:33:40 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : monmap epoch 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct  9 09:33:40 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : last_changed 2025-10-09T09:33:38.201593+0000
Oct  9 09:33:40 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : created 2025-10-09T09:33:38.201593+0000
Oct  9 09:33:40 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct  9 09:33:40 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct  9 09:33:40 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct  9 09:33:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 09:33:40 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : fsmap 
Oct  9 09:33:40 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct  9 09:33:40 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
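
The election messages describe a single-mon cluster: compute-0 wins a standalone election and forms a one-member quorum at rank 0. The same state can be confirmed from any client with admin credentials:

    # Both commands should report exactly one monitor, compute-0, in quorum.
    ceph mon stat
    ceph quorum_status --format json-pretty
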
Oct  9 09:33:41 compute-0 podman[4498]: 2025-10-09 09:33:41.004666115 +0000 UTC m=+0.030468380 container create 219c71a14580682da429dcf4afef897f08ec361f13248218a608620e33ab57bc (image=quay.io/ceph/ceph:v19, name=sweet_wilbur, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct  9 09:33:41 compute-0 systemd[1]: Started libpod-conmon-219c71a14580682da429dcf4afef897f08ec361f13248218a608620e33ab57bc.scope.
Oct  9 09:33:41 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:41 compute-0 ceph-mon[4497]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  9 09:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd18437880cb8304ca078b0f4a52df873685d281e118d073e16edd136ac9861/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd18437880cb8304ca078b0f4a52df873685d281e118d073e16edd136ac9861/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd18437880cb8304ca078b0f4a52df873685d281e118d073e16edd136ac9861/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:41 compute-0 podman[4498]: 2025-10-09 09:33:41.06025661 +0000 UTC m=+0.086058894 container init 219c71a14580682da429dcf4afef897f08ec361f13248218a608620e33ab57bc (image=quay.io/ceph/ceph:v19, name=sweet_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:33:41 compute-0 podman[4498]: 2025-10-09 09:33:41.064832978 +0000 UTC m=+0.090635232 container start 219c71a14580682da429dcf4afef897f08ec361f13248218a608620e33ab57bc (image=quay.io/ceph/ceph:v19, name=sweet_wilbur, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  9 09:33:41 compute-0 podman[4498]: 2025-10-09 09:33:41.065783751 +0000 UTC m=+0.091586015 container attach 219c71a14580682da429dcf4afef897f08ec361f13248218a608620e33ab57bc (image=quay.io/ceph/ceph:v19, name=sweet_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct  9 09:33:41 compute-0 podman[4498]: 2025-10-09 09:33:40.992395018 +0000 UTC m=+0.018197302 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
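
This mon_command is the bootstrap setting public_network in the config store. The journal records the option name but neither its value nor the target section, so both are placeholders in the sketch below:

    # Section ("mon") and CIDR are illustrative; the real values are
    # not shown in this journal entry.
    ceph config set mon public_network 192.168.122.0/24
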
Oct  9 09:33:41 compute-0 systemd[1]: libpod-219c71a14580682da429dcf4afef897f08ec361f13248218a608620e33ab57bc.scope: Deactivated successfully.
Oct  9 09:33:41 compute-0 podman[4498]: 2025-10-09 09:33:41.221804104 +0000 UTC m=+0.247606368 container died 219c71a14580682da429dcf4afef897f08ec361f13248218a608620e33ab57bc (image=quay.io/ceph/ceph:v19, name=sweet_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 09:33:41 compute-0 podman[4498]: 2025-10-09 09:33:41.240388245 +0000 UTC m=+0.266190509 container remove 219c71a14580682da429dcf4afef897f08ec361f13248218a608620e33ab57bc (image=quay.io/ceph/ceph:v19, name=sweet_wilbur, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:33:41 compute-0 systemd[1]: libpod-conmon-219c71a14580682da429dcf4afef897f08ec361f13248218a608620e33ab57bc.scope: Deactivated successfully.
Oct  9 09:33:41 compute-0 podman[4585]: 2025-10-09 09:33:41.284503803 +0000 UTC m=+0.027222689 container create 7cb0e7f764201a3dd0a4d08d1d94020f6f7a5b123ed01bd269c25c5e7490b092 (image=quay.io/ceph/ceph:v19, name=pedantic_chaplygin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:33:41 compute-0 systemd[1]: Started libpod-conmon-7cb0e7f764201a3dd0a4d08d1d94020f6f7a5b123ed01bd269c25c5e7490b092.scope.
Oct  9 09:33:41 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bdba153afac5ef1dea389ae68a19352e889ef34a2ee6fe4244d3d8207d0d0c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bdba153afac5ef1dea389ae68a19352e889ef34a2ee6fe4244d3d8207d0d0c2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bdba153afac5ef1dea389ae68a19352e889ef34a2ee6fe4244d3d8207d0d0c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:41 compute-0 podman[4585]: 2025-10-09 09:33:41.335788329 +0000 UTC m=+0.078507215 container init 7cb0e7f764201a3dd0a4d08d1d94020f6f7a5b123ed01bd269c25c5e7490b092 (image=quay.io/ceph/ceph:v19, name=pedantic_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Oct  9 09:33:41 compute-0 podman[4585]: 2025-10-09 09:33:41.339846491 +0000 UTC m=+0.082565377 container start 7cb0e7f764201a3dd0a4d08d1d94020f6f7a5b123ed01bd269c25c5e7490b092 (image=quay.io/ceph/ceph:v19, name=pedantic_chaplygin, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  9 09:33:41 compute-0 podman[4585]: 2025-10-09 09:33:41.340914885 +0000 UTC m=+0.083633771 container attach 7cb0e7f764201a3dd0a4d08d1d94020f6f7a5b123ed01bd269c25c5e7490b092 (image=quay.io/ceph/ceph:v19, name=pedantic_chaplygin, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:33:41 compute-0 podman[4585]: 2025-10-09 09:33:41.274056946 +0000 UTC m=+0.016775842 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Oct  9 09:33:41 compute-0 systemd[1]: libpod-7cb0e7f764201a3dd0a4d08d1d94020f6f7a5b123ed01bd269c25c5e7490b092.scope: Deactivated successfully.
Oct  9 09:33:41 compute-0 conmon[4598]: conmon 7cb0e7f764201a3dd0a4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7cb0e7f764201a3dd0a4d08d1d94020f6f7a5b123ed01bd269c25c5e7490b092.scope/container/memory.events
Oct  9 09:33:41 compute-0 podman[4585]: 2025-10-09 09:33:41.498663056 +0000 UTC m=+0.241381943 container died 7cb0e7f764201a3dd0a4d08d1d94020f6f7a5b123ed01bd269c25c5e7490b092 (image=quay.io/ceph/ceph:v19, name=pedantic_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:33:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bdba153afac5ef1dea389ae68a19352e889ef34a2ee6fe4244d3d8207d0d0c2-merged.mount: Deactivated successfully.
Oct  9 09:33:41 compute-0 podman[4585]: 2025-10-09 09:33:41.525653456 +0000 UTC m=+0.268372341 container remove 7cb0e7f764201a3dd0a4d08d1d94020f6f7a5b123ed01bd269c25c5e7490b092 (image=quay.io/ceph/ceph:v19, name=pedantic_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:33:41 compute-0 systemd[1]: libpod-conmon-7cb0e7f764201a3dd0a4d08d1d94020f6f7a5b123ed01bd269c25c5e7490b092.scope: Deactivated successfully.
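The paired create/start/attach/died/remove events above (containers sweet_wilbur and pedantic_chaplygin) are cephadm's transient helper containers: each one runs a single ceph CLI call and is torn down as soon as it exits, which is why the mon logs handle_command for config set public_network and then config set cluster_network in between. A minimal sketch of the same pattern, assuming podman and the quay.io/ceph/ceph:v19 image; the "global" target and the subnet values are illustrative, since the log records only the option names:

    import subprocess

    def ceph_cli(*args: str) -> str:
        """Run one ceph CLI command in a throwaway container (sketch)."""
        cmd = [
            "podman", "run", "--rm", "--net=host",
            "-v", "/etc/ceph:/etc/ceph:z",          # ceph.conf + admin keyring
            "-v", "/var/log/ceph:/var/log/ceph:z",
            "quay.io/ceph/ceph:v19",
            "ceph", *args,
        ]
        return subprocess.run(cmd, check=True, capture_output=True,
                              text=True).stdout

    # Corresponds to the two mon_command entries above (values assumed):
    ceph_cli("config", "set", "global", "public_network", "192.168.122.0/24")
    ceph_cli("config", "set", "global", "cluster_network", "192.168.122.0/24")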
Oct  9 09:33:41 compute-0 systemd[1]: Reloading.
Oct  9 09:33:41 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:33:41 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:33:41 compute-0 systemd[1]: Reloading.
Oct  9 09:33:41 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:33:41 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:33:41 compute-0 systemd[1]: Starting Ceph mgr.compute-0.lwqgfy for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:33:42 compute-0 podman[4756]: 2025-10-09 09:33:42.125536783 +0000 UTC m=+0.027269677 container create 0223bd04566f98e01e6b64afbf567fbbda227e51a7ad15be8585036a59812a28 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct  9 09:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf9aeef2a6df7bf50ec93ed2d06e55fbad7b82c7e8d1584c18472d6d96df700/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf9aeef2a6df7bf50ec93ed2d06e55fbad7b82c7e8d1584c18472d6d96df700/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf9aeef2a6df7bf50ec93ed2d06e55fbad7b82c7e8d1584c18472d6d96df700/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf9aeef2a6df7bf50ec93ed2d06e55fbad7b82c7e8d1584c18472d6d96df700/merged/var/lib/ceph/mgr/ceph-compute-0.lwqgfy supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:42 compute-0 podman[4756]: 2025-10-09 09:33:42.165976532 +0000 UTC m=+0.067709446 container init 0223bd04566f98e01e6b64afbf567fbbda227e51a7ad15be8585036a59812a28 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  9 09:33:42 compute-0 podman[4756]: 2025-10-09 09:33:42.172028944 +0000 UTC m=+0.073761837 container start 0223bd04566f98e01e6b64afbf567fbbda227e51a7ad15be8585036a59812a28 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:33:42 compute-0 bash[4756]: 0223bd04566f98e01e6b64afbf567fbbda227e51a7ad15be8585036a59812a28
Oct  9 09:33:42 compute-0 podman[4756]: 2025-10-09 09:33:42.114351714 +0000 UTC m=+0.016084608 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:42 compute-0 systemd[1]: Started Ceph mgr.compute-0.lwqgfy for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
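The Reloading pair followed by the Starting/Started lines shows how cephadm hands the long-lived mgr over to systemd: it writes a templated unit named ceph-<fsid>@<daemon>.service and daemon-reloads before starting it, so this mgr appears to run as ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@mgr.compute-0.lwqgfy.service wrapping a podman container (the journald tag ceph-286f8bf0-...-mgr-compute-0-lwqgfy below matches that container name). A small sketch for checking such a unit, with the name derived from the log:

    import subprocess

    fsid = "286f8bf0-da72-5823-9a4e-ac4457d9e609"
    unit = f"ceph-{fsid}@mgr.compute-0.lwqgfy.service"

    # systemctl is-active prints "active", "inactive", or "failed".
    state = subprocess.run(["systemctl", "is-active", unit],
                           capture_output=True, text=True).stdout.strip()
    print(unit, "->", state)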
Oct  9 09:33:42 compute-0 ceph-mgr[4772]: set uid:gid to 167:167 (ceph:ceph)
Oct  9 09:33:42 compute-0 ceph-mgr[4772]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  9 09:33:42 compute-0 ceph-mgr[4772]: pidfile_write: ignore empty --pid-file
Oct  9 09:33:42 compute-0 podman[4773]: 2025-10-09 09:33:42.221607192 +0000 UTC m=+0.027860592 container create c4efcee38b138a3661823ac1eda4af00d74b4573bfb6570ed2c479584aa502a3 (image=quay.io/ceph/ceph:v19, name=trusting_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  9 09:33:42 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'alerts'
Oct  9 09:33:42 compute-0 systemd[1]: Started libpod-conmon-c4efcee38b138a3661823ac1eda4af00d74b4573bfb6570ed2c479584aa502a3.scope.
Oct  9 09:33:42 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e0c2f36406b32ff638db1c174509f61914d1058505f594425daffc09deb356a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e0c2f36406b32ff638db1c174509f61914d1058505f594425daffc09deb356a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e0c2f36406b32ff638db1c174509f61914d1058505f594425daffc09deb356a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:42 compute-0 podman[4773]: 2025-10-09 09:33:42.27901886 +0000 UTC m=+0.085272291 container init c4efcee38b138a3661823ac1eda4af00d74b4573bfb6570ed2c479584aa502a3 (image=quay.io/ceph/ceph:v19, name=trusting_engelbart, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  9 09:33:42 compute-0 podman[4773]: 2025-10-09 09:33:42.283682413 +0000 UTC m=+0.089935823 container start c4efcee38b138a3661823ac1eda4af00d74b4573bfb6570ed2c479584aa502a3 (image=quay.io/ceph/ceph:v19, name=trusting_engelbart, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  9 09:33:42 compute-0 podman[4773]: 2025-10-09 09:33:42.286493885 +0000 UTC m=+0.092747295 container attach c4efcee38b138a3661823ac1eda4af00d74b4573bfb6570ed2c479584aa502a3 (image=quay.io/ceph/ceph:v19, name=trusting_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 09:33:42 compute-0 podman[4773]: 2025-10-09 09:33:42.211079092 +0000 UTC m=+0.017332532 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:42 compute-0 ceph-mgr[4772]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:33:42 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'balancer'
Oct  9 09:33:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:42.322+0000 7f60cc529140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:33:42 compute-0 ceph-mgr[4772]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:33:42 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'cephadm'
Oct  9 09:33:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:42.394+0000 7f60cc529140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:33:42 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct  9 09:33:42 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/711019364' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]: 
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]: {
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:    "fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:    "health": {
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "status": "HEALTH_OK",
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "checks": {},
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "mutes": []
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:    },
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:    "election_epoch": 5,
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:    "quorum": [
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        0
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:    ],
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:    "quorum_names": [
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "compute-0"
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:    ],
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:    "quorum_age": 1,
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:    "monmap": {
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "epoch": 1,
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "min_mon_release_name": "squid",
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "num_mons": 1
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:    },
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:    "osdmap": {
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "epoch": 1,
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "num_osds": 0,
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "num_up_osds": 0,
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "osd_up_since": 0,
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "num_in_osds": 0,
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "osd_in_since": 0,
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "num_remapped_pgs": 0
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:    },
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:    "pgmap": {
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "pgs_by_state": [],
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "num_pgs": 0,
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "num_pools": 0,
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "num_objects": 0,
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "data_bytes": 0,
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "bytes_used": 0,
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "bytes_avail": 0,
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "bytes_total": 0
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:    },
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:    "fsmap": {
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "epoch": 1,
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "btime": "2025-10-09T09:33:39.705322+0000",
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "by_rank": [],
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "up:standby": 0
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:    },
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:    "mgrmap": {
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "available": false,
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "num_standbys": 0,
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "modules": [
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:            "iostat",
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:            "nfs",
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:            "restful"
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        ],
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "services": {}
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:    },
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:    "servicemap": {
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "epoch": 1,
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "modified": "2025-10-09T09:33:39.706205+0000",
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:        "services": {}
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:    },
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]:    "progress_events": {}
Oct  9 09:33:42 compute-0 trusting_engelbart[4807]: }
Oct  9 09:33:42 compute-0 systemd[1]: libpod-c4efcee38b138a3661823ac1eda4af00d74b4573bfb6570ed2c479584aa502a3.scope: Deactivated successfully.
Oct  9 09:33:42 compute-0 conmon[4807]: conmon c4efcee38b138a366182 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c4efcee38b138a3661823ac1eda4af00d74b4573bfb6570ed2c479584aa502a3.scope/container/memory.events
Oct  9 09:33:42 compute-0 podman[4773]: 2025-10-09 09:33:42.445184242 +0000 UTC m=+0.251437652 container died c4efcee38b138a3661823ac1eda4af00d74b4573bfb6570ed2c479584aa502a3 (image=quay.io/ceph/ceph:v19, name=trusting_engelbart, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  9 09:33:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e0c2f36406b32ff638db1c174509f61914d1058505f594425daffc09deb356a-merged.mount: Deactivated successfully.
Oct  9 09:33:42 compute-0 podman[4773]: 2025-10-09 09:33:42.467071981 +0000 UTC m=+0.273325391 container remove c4efcee38b138a3661823ac1eda4af00d74b4573bfb6570ed2c479584aa502a3 (image=quay.io/ceph/ceph:v19, name=trusting_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:33:42 compute-0 systemd[1]: libpod-conmon-c4efcee38b138a3661823ac1eda4af00d74b4573bfb6570ed2c479584aa502a3.scope: Deactivated successfully.
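The JSON block above is cephadm polling ceph status --format json-pretty through another throwaway container (trusting_engelbart); the same poll repeats below from nice_germain and eager_villani, with quorum_age ticking 1 -> 3 -> 6 while bootstrap waits for the mgr to become available. A sketch of consuming one of these dumps, assuming it has been captured to a file:

    import json

    with open("ceph-status.json") as f:      # assumed capture of the dump above
        status = json.load(f)

    print(status["health"]["status"])        # "HEALTH_OK"
    print(status["monmap"]["num_mons"])      # 1  (single bootstrap mon)
    print(status["osdmap"]["num_osds"])      # 0  (no OSDs deployed yet)
    print(status["mgrmap"]["available"])     # False until the mgr activates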
Oct  9 09:33:43 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'crash'
Oct  9 09:33:43 compute-0 ceph-mgr[4772]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:33:43 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'dashboard'
Oct  9 09:33:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:43.076+0000 7f60cc529140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:33:43 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'devicehealth'
Oct  9 09:33:43 compute-0 ceph-mgr[4772]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:33:43 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'diskprediction_local'
Oct  9 09:33:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:43.619+0000 7f60cc529140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:33:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  9 09:33:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  9 09:33:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  from numpy import show_config as show_numpy_config
Oct  9 09:33:43 compute-0 ceph-mgr[4772]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:33:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:43.762+0000 7f60cc529140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:33:43 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'influx'
Oct  9 09:33:43 compute-0 ceph-mgr[4772]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:33:43 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'insights'
Oct  9 09:33:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:43.823+0000 7f60cc529140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:33:43 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'iostat'
Oct  9 09:33:43 compute-0 ceph-mgr[4772]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:33:43 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'k8sevents'
Oct  9 09:33:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:43.941+0000 7f60cc529140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:33:44 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'localpool'
Oct  9 09:33:44 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'mds_autoscaler'
Oct  9 09:33:44 compute-0 podman[4854]: 2025-10-09 09:33:44.511053625 +0000 UTC m=+0.026385892 container create 524e47a79361a0de519c2af9b10186228f6d72757d9f774bbbd8a7061fd260ce (image=quay.io/ceph/ceph:v19, name=nice_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  9 09:33:44 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'mirroring'
Oct  9 09:33:44 compute-0 systemd[1]: Started libpod-conmon-524e47a79361a0de519c2af9b10186228f6d72757d9f774bbbd8a7061fd260ce.scope.
Oct  9 09:33:44 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e350db4191f3bb6dfbf23b136625a7e9271522ccc2cb94564fb057805bc56c0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e350db4191f3bb6dfbf23b136625a7e9271522ccc2cb94564fb057805bc56c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e350db4191f3bb6dfbf23b136625a7e9271522ccc2cb94564fb057805bc56c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:44 compute-0 podman[4854]: 2025-10-09 09:33:44.566284812 +0000 UTC m=+0.081617078 container init 524e47a79361a0de519c2af9b10186228f6d72757d9f774bbbd8a7061fd260ce (image=quay.io/ceph/ceph:v19, name=nice_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:33:44 compute-0 podman[4854]: 2025-10-09 09:33:44.57034655 +0000 UTC m=+0.085678817 container start 524e47a79361a0de519c2af9b10186228f6d72757d9f774bbbd8a7061fd260ce (image=quay.io/ceph/ceph:v19, name=nice_germain, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  9 09:33:44 compute-0 podman[4854]: 2025-10-09 09:33:44.57154994 +0000 UTC m=+0.086882206 container attach 524e47a79361a0de519c2af9b10186228f6d72757d9f774bbbd8a7061fd260ce (image=quay.io/ceph/ceph:v19, name=nice_germain, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  9 09:33:44 compute-0 podman[4854]: 2025-10-09 09:33:44.500054326 +0000 UTC m=+0.015386602 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:44 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'nfs'
Oct  9 09:33:44 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct  9 09:33:44 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2443027308' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  9 09:33:44 compute-0 nice_germain[4867]: 
Oct  9 09:33:44 compute-0 nice_germain[4867]: {
Oct  9 09:33:44 compute-0 nice_germain[4867]:    "fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:33:44 compute-0 nice_germain[4867]:    "health": {
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "status": "HEALTH_OK",
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "checks": {},
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "mutes": []
Oct  9 09:33:44 compute-0 nice_germain[4867]:    },
Oct  9 09:33:44 compute-0 nice_germain[4867]:    "election_epoch": 5,
Oct  9 09:33:44 compute-0 nice_germain[4867]:    "quorum": [
Oct  9 09:33:44 compute-0 nice_germain[4867]:        0
Oct  9 09:33:44 compute-0 nice_germain[4867]:    ],
Oct  9 09:33:44 compute-0 nice_germain[4867]:    "quorum_names": [
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "compute-0"
Oct  9 09:33:44 compute-0 nice_germain[4867]:    ],
Oct  9 09:33:44 compute-0 nice_germain[4867]:    "quorum_age": 3,
Oct  9 09:33:44 compute-0 nice_germain[4867]:    "monmap": {
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "epoch": 1,
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "min_mon_release_name": "squid",
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "num_mons": 1
Oct  9 09:33:44 compute-0 nice_germain[4867]:    },
Oct  9 09:33:44 compute-0 nice_germain[4867]:    "osdmap": {
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "epoch": 1,
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "num_osds": 0,
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "num_up_osds": 0,
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "osd_up_since": 0,
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "num_in_osds": 0,
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "osd_in_since": 0,
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "num_remapped_pgs": 0
Oct  9 09:33:44 compute-0 nice_germain[4867]:    },
Oct  9 09:33:44 compute-0 nice_germain[4867]:    "pgmap": {
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "pgs_by_state": [],
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "num_pgs": 0,
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "num_pools": 0,
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "num_objects": 0,
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "data_bytes": 0,
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "bytes_used": 0,
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "bytes_avail": 0,
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "bytes_total": 0
Oct  9 09:33:44 compute-0 nice_germain[4867]:    },
Oct  9 09:33:44 compute-0 nice_germain[4867]:    "fsmap": {
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "epoch": 1,
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "btime": "2025-10-09T09:33:39.705322+0000",
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "by_rank": [],
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "up:standby": 0
Oct  9 09:33:44 compute-0 nice_germain[4867]:    },
Oct  9 09:33:44 compute-0 nice_germain[4867]:    "mgrmap": {
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "available": false,
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "num_standbys": 0,
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "modules": [
Oct  9 09:33:44 compute-0 nice_germain[4867]:            "iostat",
Oct  9 09:33:44 compute-0 nice_germain[4867]:            "nfs",
Oct  9 09:33:44 compute-0 nice_germain[4867]:            "restful"
Oct  9 09:33:44 compute-0 nice_germain[4867]:        ],
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "services": {}
Oct  9 09:33:44 compute-0 nice_germain[4867]:    },
Oct  9 09:33:44 compute-0 nice_germain[4867]:    "servicemap": {
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "epoch": 1,
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "modified": "2025-10-09T09:33:39.706205+0000",
Oct  9 09:33:44 compute-0 nice_germain[4867]:        "services": {}
Oct  9 09:33:44 compute-0 nice_germain[4867]:    },
Oct  9 09:33:44 compute-0 nice_germain[4867]:    "progress_events": {}
Oct  9 09:33:44 compute-0 nice_germain[4867]: }
Oct  9 09:33:44 compute-0 systemd[1]: libpod-524e47a79361a0de519c2af9b10186228f6d72757d9f774bbbd8a7061fd260ce.scope: Deactivated successfully.
Oct  9 09:33:44 compute-0 podman[4893]: 2025-10-09 09:33:44.751185689 +0000 UTC m=+0.016333206 container died 524e47a79361a0de519c2af9b10186228f6d72757d9f774bbbd8a7061fd260ce (image=quay.io/ceph/ceph:v19, name=nice_germain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  9 09:33:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e350db4191f3bb6dfbf23b136625a7e9271522ccc2cb94564fb057805bc56c0-merged.mount: Deactivated successfully.
Oct  9 09:33:44 compute-0 podman[4893]: 2025-10-09 09:33:44.769595911 +0000 UTC m=+0.034743429 container remove 524e47a79361a0de519c2af9b10186228f6d72757d9f774bbbd8a7061fd260ce (image=quay.io/ceph/ceph:v19, name=nice_germain, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  9 09:33:44 compute-0 systemd[1]: libpod-conmon-524e47a79361a0de519c2af9b10186228f6d72757d9f774bbbd8a7061fd260ce.scope: Deactivated successfully.
Oct  9 09:33:44 compute-0 ceph-mgr[4772]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:33:44 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'orchestrator'
Oct  9 09:33:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:44.807+0000 7f60cc529140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:33:44 compute-0 ceph-mgr[4772]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:33:44 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'osd_perf_query'
Oct  9 09:33:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:44.995+0000 7f60cc529140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:33:45 compute-0 ceph-mgr[4772]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:33:45 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'osd_support'
Oct  9 09:33:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:45.061+0000 7f60cc529140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:33:45 compute-0 ceph-mgr[4772]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:33:45 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'pg_autoscaler'
Oct  9 09:33:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:45.120+0000 7f60cc529140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:33:45 compute-0 ceph-mgr[4772]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:33:45 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'progress'
Oct  9 09:33:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:45.188+0000 7f60cc529140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:33:45 compute-0 ceph-mgr[4772]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:33:45 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'prometheus'
Oct  9 09:33:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:45.250+0000 7f60cc529140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:33:45 compute-0 ceph-mgr[4772]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:33:45 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'rbd_support'
Oct  9 09:33:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:45.552+0000 7f60cc529140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:33:45 compute-0 ceph-mgr[4772]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:33:45 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'restful'
Oct  9 09:33:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:45.638+0000 7f60cc529140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:33:45 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'rgw'
Oct  9 09:33:46 compute-0 ceph-mgr[4772]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:33:46 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'rook'
Oct  9 09:33:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:46.018+0000 7f60cc529140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:33:46 compute-0 ceph-mgr[4772]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:33:46 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'selftest'
Oct  9 09:33:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:46.499+0000 7f60cc529140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:33:46 compute-0 ceph-mgr[4772]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:33:46 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'snap_schedule'
Oct  9 09:33:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:46.562+0000 7f60cc529140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:33:46 compute-0 ceph-mgr[4772]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:33:46 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'stats'
Oct  9 09:33:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:46.631+0000 7f60cc529140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:33:46 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'status'
Oct  9 09:33:46 compute-0 ceph-mgr[4772]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:33:46 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'telegraf'
Oct  9 09:33:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:46.760+0000 7f60cc529140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:33:46 compute-0 podman[4905]: 2025-10-09 09:33:46.819767958 +0000 UTC m=+0.029136377 container create a4052ee3d0a88f8e3b0b4409d3f5b44e72aea5f95417f3702a373d63ab50cd7c (image=quay.io/ceph/ceph:v19, name=eager_villani, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:33:46 compute-0 ceph-mgr[4772]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:33:46 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'telemetry'
Oct  9 09:33:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:46.821+0000 7f60cc529140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:33:46 compute-0 systemd[1]: Started libpod-conmon-a4052ee3d0a88f8e3b0b4409d3f5b44e72aea5f95417f3702a373d63ab50cd7c.scope.
Oct  9 09:33:46 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeb370b2c75b9d86168b5684599f5e31fc37a153df86503b36d8422e9171d61/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeb370b2c75b9d86168b5684599f5e31fc37a153df86503b36d8422e9171d61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeb370b2c75b9d86168b5684599f5e31fc37a153df86503b36d8422e9171d61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:46 compute-0 podman[4905]: 2025-10-09 09:33:46.867706305 +0000 UTC m=+0.077074723 container init a4052ee3d0a88f8e3b0b4409d3f5b44e72aea5f95417f3702a373d63ab50cd7c (image=quay.io/ceph/ceph:v19, name=eager_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:33:46 compute-0 podman[4905]: 2025-10-09 09:33:46.879352183 +0000 UTC m=+0.088720601 container start a4052ee3d0a88f8e3b0b4409d3f5b44e72aea5f95417f3702a373d63ab50cd7c (image=quay.io/ceph/ceph:v19, name=eager_villani, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  9 09:33:46 compute-0 podman[4905]: 2025-10-09 09:33:46.880590457 +0000 UTC m=+0.089958876 container attach a4052ee3d0a88f8e3b0b4409d3f5b44e72aea5f95417f3702a373d63ab50cd7c (image=quay.io/ceph/ceph:v19, name=eager_villani, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  9 09:33:46 compute-0 podman[4905]: 2025-10-09 09:33:46.808399864 +0000 UTC m=+0.017768293 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:46 compute-0 ceph-mgr[4772]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:33:46 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'test_orchestrator'
Oct  9 09:33:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:46.957+0000 7f60cc529140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:33:47 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct  9 09:33:47 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/494335100' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  9 09:33:47 compute-0 eager_villani[4918]: 
Oct  9 09:33:47 compute-0 eager_villani[4918]: {
Oct  9 09:33:47 compute-0 eager_villani[4918]:    "fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:33:47 compute-0 eager_villani[4918]:    "health": {
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "status": "HEALTH_OK",
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "checks": {},
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "mutes": []
Oct  9 09:33:47 compute-0 eager_villani[4918]:    },
Oct  9 09:33:47 compute-0 eager_villani[4918]:    "election_epoch": 5,
Oct  9 09:33:47 compute-0 eager_villani[4918]:    "quorum": [
Oct  9 09:33:47 compute-0 eager_villani[4918]:        0
Oct  9 09:33:47 compute-0 eager_villani[4918]:    ],
Oct  9 09:33:47 compute-0 eager_villani[4918]:    "quorum_names": [
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "compute-0"
Oct  9 09:33:47 compute-0 eager_villani[4918]:    ],
Oct  9 09:33:47 compute-0 eager_villani[4918]:    "quorum_age": 6,
Oct  9 09:33:47 compute-0 eager_villani[4918]:    "monmap": {
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "epoch": 1,
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "min_mon_release_name": "squid",
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "num_mons": 1
Oct  9 09:33:47 compute-0 eager_villani[4918]:    },
Oct  9 09:33:47 compute-0 eager_villani[4918]:    "osdmap": {
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "epoch": 1,
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "num_osds": 0,
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "num_up_osds": 0,
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "osd_up_since": 0,
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "num_in_osds": 0,
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "osd_in_since": 0,
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "num_remapped_pgs": 0
Oct  9 09:33:47 compute-0 eager_villani[4918]:    },
Oct  9 09:33:47 compute-0 eager_villani[4918]:    "pgmap": {
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "pgs_by_state": [],
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "num_pgs": 0,
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "num_pools": 0,
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "num_objects": 0,
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "data_bytes": 0,
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "bytes_used": 0,
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "bytes_avail": 0,
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "bytes_total": 0
Oct  9 09:33:47 compute-0 eager_villani[4918]:    },
Oct  9 09:33:47 compute-0 eager_villani[4918]:    "fsmap": {
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "epoch": 1,
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "btime": "2025-10-09T09:33:39.705322+0000",
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "by_rank": [],
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "up:standby": 0
Oct  9 09:33:47 compute-0 eager_villani[4918]:    },
Oct  9 09:33:47 compute-0 eager_villani[4918]:    "mgrmap": {
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "available": false,
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "num_standbys": 0,
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "modules": [
Oct  9 09:33:47 compute-0 eager_villani[4918]:            "iostat",
Oct  9 09:33:47 compute-0 eager_villani[4918]:            "nfs",
Oct  9 09:33:47 compute-0 eager_villani[4918]:            "restful"
Oct  9 09:33:47 compute-0 eager_villani[4918]:        ],
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "services": {}
Oct  9 09:33:47 compute-0 eager_villani[4918]:    },
Oct  9 09:33:47 compute-0 eager_villani[4918]:    "servicemap": {
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "epoch": 1,
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "modified": "2025-10-09T09:33:39.706205+0000",
Oct  9 09:33:47 compute-0 eager_villani[4918]:        "services": {}
Oct  9 09:33:47 compute-0 eager_villani[4918]:    },
Oct  9 09:33:47 compute-0 eager_villani[4918]:    "progress_events": {}
Oct  9 09:33:47 compute-0 eager_villani[4918]: }
Oct  9 09:33:47 compute-0 systemd[1]: libpod-a4052ee3d0a88f8e3b0b4409d3f5b44e72aea5f95417f3702a373d63ab50cd7c.scope: Deactivated successfully.
Oct  9 09:33:47 compute-0 podman[4905]: 2025-10-09 09:33:47.035578214 +0000 UTC m=+0.244946633 container died a4052ee3d0a88f8e3b0b4409d3f5b44e72aea5f95417f3702a373d63ab50cd7c (image=quay.io/ceph/ceph:v19, name=eager_villani, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 09:33:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-5aeb370b2c75b9d86168b5684599f5e31fc37a153df86503b36d8422e9171d61-merged.mount: Deactivated successfully.
Oct  9 09:33:47 compute-0 podman[4905]: 2025-10-09 09:33:47.053975642 +0000 UTC m=+0.263344061 container remove a4052ee3d0a88f8e3b0b4409d3f5b44e72aea5f95417f3702a373d63ab50cd7c (image=quay.io/ceph/ceph:v19, name=eager_villani, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:33:47 compute-0 systemd[1]: libpod-conmon-a4052ee3d0a88f8e3b0b4409d3f5b44e72aea5f95417f3702a373d63ab50cd7c.scope: Deactivated successfully.
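The create/init/start/attach/died/remove sequence around container a4052ee3... is the run-once pattern cephadm uses for CLI calls: each command executes in a fresh quay.io/ceph/ceph:v19 container that is removed as soon as it exits, which is why the log is full of these short container lifecycles. A hedged sketch of that pattern; the bind mounts below are illustrative, not the exact set cephadm wires up:

    import subprocess

    def ceph_in_container(*args):
        # Run a single ceph command inside a throwaway container,
        # bind-mounting the host's config and keyring read-only.
        cmd = [
            "podman", "run", "--rm",
            "-v", "/etc/ceph/ceph.conf:/etc/ceph/ceph.conf:ro",
            "-v", "/etc/ceph/ceph.client.admin.keyring:"
                  "/etc/ceph/ceph.client.admin.keyring:ro",
            "quay.io/ceph/ceph:v19",
            "ceph", *args,
        ]
        return subprocess.run(cmd, check=True,
                              capture_output=True, text=True).stdout

    print(ceph_in_container("status", "--format", "json-pretty"))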
Oct  9 09:33:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:47.149+0000 7f60cc529140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'volumes'
Oct  9 09:33:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:47.380+0000 7f60cc529140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'zabbix'
Oct  9 09:33:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:47.442+0000 7f60cc529140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
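The recurring "Module X has missing NOTIFY_TYPES member" warnings mean those bundled mgr modules never declare which cluster-map notifications they consume; the warnings are noisy but harmless. In recent releases a module advertises its interests through a NOTIFY_TYPES class attribute. A sketch of the shape, illustrative only, since mgr_module is importable only inside a running ceph-mgr:

    # Illustrative only: mgr_module is provided by the ceph-mgr runtime.
    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # Declare the notifications this module handles; modules that omit
        # this attribute trigger the "missing NOTIFY_TYPES member" warning.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            self.log.info("got %s notification", notify_type)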
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: ms_deliver_dispatch: unhandled message 0x55cb6b40c9c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct  9 09:33:47 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.lwqgfy
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: mgr handle_mgr_map Activating!
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: mgr handle_mgr_map I am now activating
Oct  9 09:33:47 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.lwqgfy(active, starting, since 0.00447508s)
Oct  9 09:33:47 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct  9 09:33:47 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1061979859' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  9 09:33:47 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e1 all = 1
Oct  9 09:33:47 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct  9 09:33:47 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1061979859' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  9 09:33:47 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct  9 09:33:47 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1061979859' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  9 09:33:47 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  9 09:33:47 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1061979859' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  9 09:33:47 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.lwqgfy", "id": "compute-0.lwqgfy"} v 0)
Oct  9 09:33:47 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1061979859' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-0.lwqgfy", "id": "compute-0.lwqgfy"}]: dispatch
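Right after activating, the new mgr pulls daemon metadata from the mon: mds, osd, mon, and its own mgr metadata, each visible twice above, once as the mon's handle_command line and once on the audit channel. The same mon commands can be issued from librados; a sketch assuming python3-rados and a readable /etc/ceph/ceph.conf:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # mon_command takes a JSON-encoded command and an input buffer,
        # and returns (retcode, output buffer, status string).
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "mon metadata", "format": "json"}), b"")
        print(json.loads(out))
    finally:
        cluster.shutdown()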
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: balancer
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [balancer INFO root] Starting
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:33:47
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 09:33:47 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Manager daemon compute-0.lwqgfy is now available
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [balancer INFO root] No pools available
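The balancer starts in upmap mode with a 5% max-misplaced budget, builds plan auto_2025-10-09_09:33:47, and bails out because no pools exist yet. Its state can be inspected from the CLI; a sketch, with field names as reported by current releases:

    import json
    import subprocess

    # "ceph balancer status" reports the mode and whether auto-balancing
    # is active; on this cluster it would show mode "upmap".
    out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    status = json.loads(out)
    print(status["mode"])    # e.g. "upmap"
    print(status["active"])  # whether automatic balancing is enabled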
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: crash
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: devicehealth
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: iostat
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [devicehealth INFO root] Starting
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: nfs
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: orchestrator
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: pg_autoscaler
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: progress
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [progress INFO root] Loading...
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [progress INFO root] No stored events to load
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [progress INFO root] Loaded [] historic events
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [progress INFO root] Loaded OSDMap, ready.
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [rbd_support INFO root] recovery thread starting
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [rbd_support INFO root] starting setup
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: rbd_support
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: restful
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: status
Oct  9 09:33:47 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"} v 0)
Oct  9 09:33:47 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1061979859' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"}]: dispatch
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: telemetry
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [restful INFO root] server_addr: :: server_port: 8003
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [restful WARNING root] server not running: no certificate configured
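The restful module binds port 8003 but refuses to serve until a TLS certificate is configured. Two common remedies, sketched below; create-self-signed-cert is the documented shortcut, and the file paths are placeholders for a real cert/key pair:

    import subprocess

    # Option 1: have the restful module generate a self-signed certificate.
    subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)

    # Option 2: install an existing certificate and key via config keys
    # (restful.crt / restful.key are placeholder filenames).
    subprocess.run(["ceph", "config-key", "set", "mgr/restful/crt",
                    "-i", "restful.crt"], check=True)
    subprocess.run(["ceph", "config-key", "set", "mgr/restful/key",
                    "-i", "restful.key"], check=True)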
Oct  9 09:33:47 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [rbd_support INFO root] PerfHandler: starting
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TaskHandler: starting
Oct  9 09:33:47 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1061979859' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:47 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"} v 0)
Oct  9 09:33:47 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1061979859' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"}]: dispatch
Oct  9 09:33:47 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: [rbd_support INFO root] setup complete
Oct  9 09:33:47 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1061979859' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:47 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Oct  9 09:33:47 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1061979859' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:47 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: volumes
Oct  9 09:33:48 compute-0 ceph-mon[4497]: Activating manager daemon compute-0.lwqgfy
Oct  9 09:33:48 compute-0 ceph-mon[4497]: Manager daemon compute-0.lwqgfy is now available
Oct  9 09:33:48 compute-0 ceph-mon[4497]: from='mgr.14102 192.168.122.100:0/1061979859' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"}]: dispatch
Oct  9 09:33:48 compute-0 ceph-mon[4497]: from='mgr.14102 192.168.122.100:0/1061979859' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:48 compute-0 ceph-mon[4497]: from='mgr.14102 192.168.122.100:0/1061979859' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"}]: dispatch
Oct  9 09:33:48 compute-0 ceph-mon[4497]: from='mgr.14102 192.168.122.100:0/1061979859' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:48 compute-0 ceph-mon[4497]: from='mgr.14102 192.168.122.100:0/1061979859' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:48 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.lwqgfy(active, since 1.00934s)
Oct  9 09:33:49 compute-0 podman[5035]: 2025-10-09 09:33:49.098376146 +0000 UTC m=+0.026139537 container create 58c419a4daa01d226ae5db7a0ec8d93000d0bddc579b36fc0515c76327fe11e3 (image=quay.io/ceph/ceph:v19, name=amazing_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  9 09:33:49 compute-0 systemd[1]: Started libpod-conmon-58c419a4daa01d226ae5db7a0ec8d93000d0bddc579b36fc0515c76327fe11e3.scope.
Oct  9 09:33:49 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0210c660a3cf2047af9b304cb25b3f00cb4efd2de7da5b11f76d20d748441403/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0210c660a3cf2047af9b304cb25b3f00cb4efd2de7da5b11f76d20d748441403/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0210c660a3cf2047af9b304cb25b3f00cb4efd2de7da5b11f76d20d748441403/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:49 compute-0 podman[5035]: 2025-10-09 09:33:49.146465418 +0000 UTC m=+0.074228808 container init 58c419a4daa01d226ae5db7a0ec8d93000d0bddc579b36fc0515c76327fe11e3 (image=quay.io/ceph/ceph:v19, name=amazing_merkle, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Oct  9 09:33:49 compute-0 podman[5035]: 2025-10-09 09:33:49.150302752 +0000 UTC m=+0.078066144 container start 58c419a4daa01d226ae5db7a0ec8d93000d0bddc579b36fc0515c76327fe11e3 (image=quay.io/ceph/ceph:v19, name=amazing_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:33:49 compute-0 podman[5035]: 2025-10-09 09:33:49.155627872 +0000 UTC m=+0.083391274 container attach 58c419a4daa01d226ae5db7a0ec8d93000d0bddc579b36fc0515c76327fe11e3 (image=quay.io/ceph/ceph:v19, name=amazing_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  9 09:33:49 compute-0 podman[5035]: 2025-10-09 09:33:49.08747388 +0000 UTC m=+0.015237291 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:49 compute-0 ceph-mgr[4772]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  9 09:33:49 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.lwqgfy(active, since 2s)
Oct  9 09:33:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct  9 09:33:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/113225209' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  9 09:33:49 compute-0 amazing_merkle[5048]: 
Oct  9 09:33:49 compute-0 amazing_merkle[5048]: {
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:    "fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:    "health": {
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "status": "HEALTH_OK",
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "checks": {},
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "mutes": []
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:    },
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:    "election_epoch": 5,
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:    "quorum": [
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        0
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:    ],
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:    "quorum_names": [
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "compute-0"
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:    ],
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:    "quorum_age": 8,
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:    "monmap": {
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "epoch": 1,
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "min_mon_release_name": "squid",
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "num_mons": 1
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:    },
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:    "osdmap": {
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "epoch": 1,
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "num_osds": 0,
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "num_up_osds": 0,
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "osd_up_since": 0,
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "num_in_osds": 0,
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "osd_in_since": 0,
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "num_remapped_pgs": 0
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:    },
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:    "pgmap": {
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "pgs_by_state": [],
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "num_pgs": 0,
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "num_pools": 0,
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "num_objects": 0,
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "data_bytes": 0,
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "bytes_used": 0,
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "bytes_avail": 0,
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "bytes_total": 0
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:    },
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:    "fsmap": {
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "epoch": 1,
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "btime": "2025-10-09T09:33:39:705322+0000",
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "by_rank": [],
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "up:standby": 0
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:    },
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:    "mgrmap": {
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "available": true,
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "num_standbys": 0,
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "modules": [
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:            "iostat",
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:            "nfs",
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:            "restful"
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        ],
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "services": {}
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:    },
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:    "servicemap": {
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "epoch": 1,
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "modified": "2025-10-09T09:33:39.706205+0000",
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:        "services": {}
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:    },
Oct  9 09:33:49 compute-0 amazing_merkle[5048]:    "progress_events": {}
Oct  9 09:33:49 compute-0 amazing_merkle[5048]: }
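Compared with the dump at 09:33:47, mgrmap.available has flipped to true now that compute-0.lwqgfy has finished activating; everything else (no OSDs, no pools, single-mon quorum) is unchanged. Bootstrap tooling typically waits on exactly this flag; a small polling sketch:

    import json
    import subprocess
    import time

    def wait_for_mgr(timeout=60.0, interval=2.0):
        # Poll "ceph status" until the active mgr reports available.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            out = subprocess.run(["ceph", "status", "--format", "json"],
                                 check=True, capture_output=True,
                                 text=True).stdout
            if json.loads(out)["mgrmap"]["available"]:
                return True
            time.sleep(interval)
        return False

    print(wait_for_mgr())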
Oct  9 09:33:49 compute-0 systemd[1]: libpod-58c419a4daa01d226ae5db7a0ec8d93000d0bddc579b36fc0515c76327fe11e3.scope: Deactivated successfully.
Oct  9 09:33:49 compute-0 podman[5035]: 2025-10-09 09:33:49.476319209 +0000 UTC m=+0.404082600 container died 58c419a4daa01d226ae5db7a0ec8d93000d0bddc579b36fc0515c76327fe11e3 (image=quay.io/ceph/ceph:v19, name=amazing_merkle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 09:33:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-0210c660a3cf2047af9b304cb25b3f00cb4efd2de7da5b11f76d20d748441403-merged.mount: Deactivated successfully.
Oct  9 09:33:49 compute-0 podman[5035]: 2025-10-09 09:33:49.492881627 +0000 UTC m=+0.420645019 container remove 58c419a4daa01d226ae5db7a0ec8d93000d0bddc579b36fc0515c76327fe11e3 (image=quay.io/ceph/ceph:v19, name=amazing_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:33:49 compute-0 systemd[1]: libpod-conmon-58c419a4daa01d226ae5db7a0ec8d93000d0bddc579b36fc0515c76327fe11e3.scope: Deactivated successfully.
Oct  9 09:33:49 compute-0 podman[5084]: 2025-10-09 09:33:49.534247593 +0000 UTC m=+0.025869408 container create 12a47c7a89a878933c985965945644b1aae07dbed7e9e78af4afcc9a91d7c42b (image=quay.io/ceph/ceph:v19, name=focused_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Oct  9 09:33:49 compute-0 systemd[1]: Started libpod-conmon-12a47c7a89a878933c985965945644b1aae07dbed7e9e78af4afcc9a91d7c42b.scope.
Oct  9 09:33:49 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a25be3995c4cb1b75300a5b3109256016d202beab289ae0639f3f06de5713f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a25be3995c4cb1b75300a5b3109256016d202beab289ae0639f3f06de5713f2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a25be3995c4cb1b75300a5b3109256016d202beab289ae0639f3f06de5713f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a25be3995c4cb1b75300a5b3109256016d202beab289ae0639f3f06de5713f2/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:49 compute-0 podman[5084]: 2025-10-09 09:33:49.570535313 +0000 UTC m=+0.062157147 container init 12a47c7a89a878933c985965945644b1aae07dbed7e9e78af4afcc9a91d7c42b (image=quay.io/ceph/ceph:v19, name=focused_haslett, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  9 09:33:49 compute-0 podman[5084]: 2025-10-09 09:33:49.578261359 +0000 UTC m=+0.069883174 container start 12a47c7a89a878933c985965945644b1aae07dbed7e9e78af4afcc9a91d7c42b (image=quay.io/ceph/ceph:v19, name=focused_haslett, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 09:33:49 compute-0 podman[5084]: 2025-10-09 09:33:49.579469127 +0000 UTC m=+0.071090942 container attach 12a47c7a89a878933c985965945644b1aae07dbed7e9e78af4afcc9a91d7c42b (image=quay.io/ceph/ceph:v19, name=focused_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  9 09:33:49 compute-0 podman[5084]: 2025-10-09 09:33:49.523381294 +0000 UTC m=+0.015003129 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Oct  9 09:33:49 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3036270829' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  9 09:33:49 compute-0 focused_haslett[5097]: 
Oct  9 09:33:49 compute-0 focused_haslett[5097]: [global]
Oct  9 09:33:49 compute-0 focused_haslett[5097]: 	fsid = 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct  9 09:33:49 compute-0 focused_haslett[5097]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
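"ceph config assimilate-conf" reads a ceph.conf, moves every option it can into the monitors' central config database, and prints whatever must remain in the local file; here that residue is just the [global] fsid and mon_host. A sketch of driving it from a script (the input path is a placeholder):

    import subprocess

    # Feed an existing ceph.conf to the mon config store; without -o the
    # command writes the non-assimilated remainder to stdout.
    residue = subprocess.run(
        ["ceph", "config", "assimilate-conf", "-i", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(residue)  # expect a minimal [global] section, as in the log above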
Oct  9 09:33:49 compute-0 systemd[1]: libpod-12a47c7a89a878933c985965945644b1aae07dbed7e9e78af4afcc9a91d7c42b.scope: Deactivated successfully.
Oct  9 09:33:49 compute-0 podman[5084]: 2025-10-09 09:33:49.84148897 +0000 UTC m=+0.333110795 container died 12a47c7a89a878933c985965945644b1aae07dbed7e9e78af4afcc9a91d7c42b (image=quay.io/ceph/ceph:v19, name=focused_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 09:33:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a25be3995c4cb1b75300a5b3109256016d202beab289ae0639f3f06de5713f2-merged.mount: Deactivated successfully.
Oct  9 09:33:49 compute-0 podman[5084]: 2025-10-09 09:33:49.857971076 +0000 UTC m=+0.349592891 container remove 12a47c7a89a878933c985965945644b1aae07dbed7e9e78af4afcc9a91d7c42b (image=quay.io/ceph/ceph:v19, name=focused_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct  9 09:33:49 compute-0 systemd[1]: libpod-conmon-12a47c7a89a878933c985965945644b1aae07dbed7e9e78af4afcc9a91d7c42b.scope: Deactivated successfully.
Oct  9 09:33:49 compute-0 podman[5130]: 2025-10-09 09:33:49.899553658 +0000 UTC m=+0.027185848 container create 8f82f15de5f18854679661ae1d9136c174e9b3d514eb5f274c8a88f0bd4fd1da (image=quay.io/ceph/ceph:v19, name=admiring_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  9 09:33:49 compute-0 systemd[1]: Started libpod-conmon-8f82f15de5f18854679661ae1d9136c174e9b3d514eb5f274c8a88f0bd4fd1da.scope.
Oct  9 09:33:49 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f64df57c30c150eeede8df3c1c88121e133c884e1a6eeb1576f6ef316e14214/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f64df57c30c150eeede8df3c1c88121e133c884e1a6eeb1576f6ef316e14214/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f64df57c30c150eeede8df3c1c88121e133c884e1a6eeb1576f6ef316e14214/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:49 compute-0 podman[5130]: 2025-10-09 09:33:49.950737897 +0000 UTC m=+0.078370097 container init 8f82f15de5f18854679661ae1d9136c174e9b3d514eb5f274c8a88f0bd4fd1da (image=quay.io/ceph/ceph:v19, name=admiring_bose, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  9 09:33:49 compute-0 podman[5130]: 2025-10-09 09:33:49.954396876 +0000 UTC m=+0.082029055 container start 8f82f15de5f18854679661ae1d9136c174e9b3d514eb5f274c8a88f0bd4fd1da (image=quay.io/ceph/ceph:v19, name=admiring_bose, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:33:49 compute-0 podman[5130]: 2025-10-09 09:33:49.955470449 +0000 UTC m=+0.083102629 container attach 8f82f15de5f18854679661ae1d9136c174e9b3d514eb5f274c8a88f0bd4fd1da (image=quay.io/ceph/ceph:v19, name=admiring_bose, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:33:49 compute-0 podman[5130]: 2025-10-09 09:33:49.887896379 +0000 UTC m=+0.015528579 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Oct  9 09:33:50 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3444282531' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct  9 09:33:50 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3444282531' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct  9 09:33:50 compute-0 ceph-mgr[4772]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  9 09:33:50 compute-0 ceph-mgr[4772]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  9 09:33:50 compute-0 ceph-mgr[4772]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  9 09:33:50 compute-0 ceph-mgr[4772]: mgr respawn  1: '-n'
Oct  9 09:33:50 compute-0 ceph-mgr[4772]: mgr respawn  2: 'mgr.compute-0.lwqgfy'
Oct  9 09:33:50 compute-0 ceph-mgr[4772]: mgr respawn  3: '-f'
Oct  9 09:33:50 compute-0 ceph-mgr[4772]: mgr respawn  4: '--setuser'
Oct  9 09:33:50 compute-0 ceph-mgr[4772]: mgr respawn  5: 'ceph'
Oct  9 09:33:50 compute-0 ceph-mgr[4772]: mgr respawn  6: '--setgroup'
Oct  9 09:33:50 compute-0 ceph-mgr[4772]: mgr respawn  7: 'ceph'
Oct  9 09:33:50 compute-0 ceph-mgr[4772]: mgr respawn  8: '--default-log-to-file=false'
Oct  9 09:33:50 compute-0 ceph-mgr[4772]: mgr respawn  9: '--default-log-to-journald=true'
Oct  9 09:33:50 compute-0 ceph-mgr[4772]: mgr respawn  10: '--default-log-to-stderr=false'
Oct  9 09:33:50 compute-0 ceph-mgr[4772]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct  9 09:33:50 compute-0 ceph-mgr[4772]: mgr respawn  exe_path /proc/self/exe
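Enabling cephadm changed the set of enabled mgr modules, so the active mgr deliberately respawns: it logs the argument vector it will reuse and re-executes itself through /proc/self/exe, which keeps working even if the binary on disk was replaced by an upgrade. The same re-exec pattern in miniature; this illustrates the technique, not Ceph's actual C++ implementation:

    import os
    import sys

    def respawn():
        # Re-execute the current process with its original argument vector.
        # On Linux, /proc/self/exe resolves to the running executable (here
        # the Python interpreter), so the replacement inherits our pid and
        # journald stream. Call this only when a restart is truly needed,
        # e.g. after the set of enabled modules changes.
        argv = [sys.executable] + sys.argv
        os.execv("/proc/self/exe", argv)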
Oct  9 09:33:50 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.lwqgfy(active, since 3s)
Oct  9 09:33:50 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3036270829' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  9 09:33:50 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3444282531' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct  9 09:33:50 compute-0 systemd[1]: libpod-8f82f15de5f18854679661ae1d9136c174e9b3d514eb5f274c8a88f0bd4fd1da.scope: Deactivated successfully.
Oct  9 09:33:50 compute-0 podman[5130]: 2025-10-09 09:33:50.473982223 +0000 UTC m=+0.601614413 container died 8f82f15de5f18854679661ae1d9136c174e9b3d514eb5f274c8a88f0bd4fd1da (image=quay.io/ceph/ceph:v19, name=admiring_bose, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  9 09:33:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f64df57c30c150eeede8df3c1c88121e133c884e1a6eeb1576f6ef316e14214-merged.mount: Deactivated successfully.
Oct  9 09:33:50 compute-0 podman[5130]: 2025-10-09 09:33:50.495072628 +0000 UTC m=+0.622704808 container remove 8f82f15de5f18854679661ae1d9136c174e9b3d514eb5f274c8a88f0bd4fd1da (image=quay.io/ceph/ceph:v19, name=admiring_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  9 09:33:50 compute-0 systemd[1]: libpod-conmon-8f82f15de5f18854679661ae1d9136c174e9b3d514eb5f274c8a88f0bd4fd1da.scope: Deactivated successfully.
Oct  9 09:33:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ignoring --setuser ceph since I am not root
Oct  9 09:33:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ignoring --setgroup ceph since I am not root
Oct  9 09:33:50 compute-0 ceph-mgr[4772]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  9 09:33:50 compute-0 ceph-mgr[4772]: pidfile_write: ignore empty --pid-file
Oct  9 09:33:50 compute-0 podman[5183]: 2025-10-09 09:33:50.544385587 +0000 UTC m=+0.028754627 container create 4225728afdb06d2bd9497cd36fd0a4c7ff087e5422986a67ea4493a96d1543a8 (image=quay.io/ceph/ceph:v19, name=zealous_curie, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  9 09:33:50 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'alerts'
Oct  9 09:33:50 compute-0 systemd[1]: Started libpod-conmon-4225728afdb06d2bd9497cd36fd0a4c7ff087e5422986a67ea4493a96d1543a8.scope.
Oct  9 09:33:50 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f56cb068f982daf8801135996bbf000ed4c9600760a689e8dea6a827953f198/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f56cb068f982daf8801135996bbf000ed4c9600760a689e8dea6a827953f198/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f56cb068f982daf8801135996bbf000ed4c9600760a689e8dea6a827953f198/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:50 compute-0 podman[5183]: 2025-10-09 09:33:50.590632153 +0000 UTC m=+0.075001194 container init 4225728afdb06d2bd9497cd36fd0a4c7ff087e5422986a67ea4493a96d1543a8 (image=quay.io/ceph/ceph:v19, name=zealous_curie, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  9 09:33:50 compute-0 podman[5183]: 2025-10-09 09:33:50.594646953 +0000 UTC m=+0.079015993 container start 4225728afdb06d2bd9497cd36fd0a4c7ff087e5422986a67ea4493a96d1543a8 (image=quay.io/ceph/ceph:v19, name=zealous_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:33:50 compute-0 podman[5183]: 2025-10-09 09:33:50.595842147 +0000 UTC m=+0.080211187 container attach 4225728afdb06d2bd9497cd36fd0a4c7ff087e5422986a67ea4493a96d1543a8 (image=quay.io/ceph/ceph:v19, name=zealous_curie, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:33:50 compute-0 podman[5183]: 2025-10-09 09:33:50.531548282 +0000 UTC m=+0.015917342 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:50 compute-0 ceph-mgr[4772]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:33:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:50.642+0000 7f4e8db26140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:33:50 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'balancer'
Oct  9 09:33:50 compute-0 ceph-mgr[4772]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:33:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:50.715+0000 7f4e8db26140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:33:50 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'cephadm'
Oct  9 09:33:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct  9 09:33:50 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2209027881' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  9 09:33:50 compute-0 zealous_curie[5217]: {
Oct  9 09:33:50 compute-0 zealous_curie[5217]:    "epoch": 5,
Oct  9 09:33:50 compute-0 zealous_curie[5217]:    "available": true,
Oct  9 09:33:50 compute-0 zealous_curie[5217]:    "active_name": "compute-0.lwqgfy",
Oct  9 09:33:50 compute-0 zealous_curie[5217]:    "num_standby": 0
Oct  9 09:33:50 compute-0 zealous_curie[5217]: }
Oct  9 09:33:50 compute-0 systemd[1]: libpod-4225728afdb06d2bd9497cd36fd0a4c7ff087e5422986a67ea4493a96d1543a8.scope: Deactivated successfully.
Oct  9 09:33:50 compute-0 conmon[5217]: conmon 4225728afdb06d2bd949 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4225728afdb06d2bd9497cd36fd0a4c7ff087e5422986a67ea4493a96d1543a8.scope/container/memory.events
Oct  9 09:33:50 compute-0 podman[5183]: 2025-10-09 09:33:50.906346055 +0000 UTC m=+0.390715105 container died 4225728afdb06d2bd9497cd36fd0a4c7ff087e5422986a67ea4493a96d1543a8 (image=quay.io/ceph/ceph:v19, name=zealous_curie, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  9 09:33:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f56cb068f982daf8801135996bbf000ed4c9600760a689e8dea6a827953f198-merged.mount: Deactivated successfully.
Oct  9 09:33:50 compute-0 podman[5183]: 2025-10-09 09:33:50.925947824 +0000 UTC m=+0.410316864 container remove 4225728afdb06d2bd9497cd36fd0a4c7ff087e5422986a67ea4493a96d1543a8 (image=quay.io/ceph/ceph:v19, name=zealous_curie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct  9 09:33:50 compute-0 systemd[1]: libpod-conmon-4225728afdb06d2bd9497cd36fd0a4c7ff087e5422986a67ea4493a96d1543a8.scope: Deactivated successfully.
Oct  9 09:33:50 compute-0 podman[5251]: 2025-10-09 09:33:50.966723236 +0000 UTC m=+0.026379840 container create ac7c0f3f05637558ec5d4ba454b8215dc5f746bd730b1da2cddf3e279f0794e1 (image=quay.io/ceph/ceph:v19, name=admiring_meninsky, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 09:33:50 compute-0 systemd[1]: Started libpod-conmon-ac7c0f3f05637558ec5d4ba454b8215dc5f746bd730b1da2cddf3e279f0794e1.scope.
Oct  9 09:33:51 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/247ea902f350b4f25c2a697ec97110e12c4d182a160338d480239ebdeb8938b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/247ea902f350b4f25c2a697ec97110e12c4d182a160338d480239ebdeb8938b7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/247ea902f350b4f25c2a697ec97110e12c4d182a160338d480239ebdeb8938b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:51 compute-0 podman[5251]: 2025-10-09 09:33:51.017001193 +0000 UTC m=+0.076657808 container init ac7c0f3f05637558ec5d4ba454b8215dc5f746bd730b1da2cddf3e279f0794e1 (image=quay.io/ceph/ceph:v19, name=admiring_meninsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:33:51 compute-0 podman[5251]: 2025-10-09 09:33:51.021466944 +0000 UTC m=+0.081123549 container start ac7c0f3f05637558ec5d4ba454b8215dc5f746bd730b1da2cddf3e279f0794e1 (image=quay.io/ceph/ceph:v19, name=admiring_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:33:51 compute-0 podman[5251]: 2025-10-09 09:33:51.024172926 +0000 UTC m=+0.083829550 container attach ac7c0f3f05637558ec5d4ba454b8215dc5f746bd730b1da2cddf3e279f0794e1 (image=quay.io/ceph/ceph:v19, name=admiring_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct  9 09:33:51 compute-0 podman[5251]: 2025-10-09 09:33:50.955990048 +0000 UTC m=+0.015646673 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:51 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'crash'
Oct  9 09:33:51 compute-0 ceph-mgr[4772]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:33:51 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'dashboard'
Oct  9 09:33:51 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:51.397+0000 7f4e8db26140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:33:51 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3444282531' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct  9 09:33:51 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'devicehealth'
Oct  9 09:33:51 compute-0 ceph-mgr[4772]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:33:51 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'diskprediction_local'
Oct  9 09:33:51 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:51.945+0000 7f4e8db26140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:33:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  9 09:33:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  9 09:33:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  from numpy import show_config as show_numpy_config
Oct  9 09:33:52 compute-0 ceph-mgr[4772]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:33:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:52.089+0000 7f4e8db26140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:33:52 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'influx'
Oct  9 09:33:52 compute-0 ceph-mgr[4772]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:33:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:52.151+0000 7f4e8db26140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:33:52 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'insights'
Oct  9 09:33:52 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'iostat'
Oct  9 09:33:52 compute-0 ceph-mgr[4772]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:33:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:52.272+0000 7f4e8db26140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:33:52 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'k8sevents'
Oct  9 09:33:52 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'localpool'
Oct  9 09:33:52 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'mds_autoscaler'
Oct  9 09:33:52 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'mirroring'
Oct  9 09:33:52 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'nfs'
Oct  9 09:33:53 compute-0 ceph-mgr[4772]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:33:53 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'orchestrator'
Oct  9 09:33:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:53.151+0000 7f4e8db26140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:33:53 compute-0 ceph-mgr[4772]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:33:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:53.339+0000 7f4e8db26140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:33:53 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'osd_perf_query'
Oct  9 09:33:53 compute-0 ceph-mgr[4772]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:33:53 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'osd_support'
Oct  9 09:33:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:53.405+0000 7f4e8db26140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:33:53 compute-0 ceph-mgr[4772]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:33:53 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'pg_autoscaler'
Oct  9 09:33:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:53.464+0000 7f4e8db26140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:33:53 compute-0 ceph-mgr[4772]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:33:53 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'progress'
Oct  9 09:33:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:53.533+0000 7f4e8db26140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:33:53 compute-0 ceph-mgr[4772]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:33:53 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'prometheus'
Oct  9 09:33:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:53.594+0000 7f4e8db26140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:33:53 compute-0 ceph-mgr[4772]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:33:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:53.893+0000 7f4e8db26140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:33:53 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'rbd_support'
Oct  9 09:33:53 compute-0 ceph-mgr[4772]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:33:53 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'restful'
Oct  9 09:33:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:53.978+0000 7f4e8db26140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:33:54 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'rgw'
Oct  9 09:33:54 compute-0 ceph-mgr[4772]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:33:54 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'rook'
Oct  9 09:33:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:54.352+0000 7f4e8db26140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:33:54 compute-0 ceph-mgr[4772]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:33:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:54.830+0000 7f4e8db26140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:33:54 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'selftest'
Oct  9 09:33:54 compute-0 ceph-mgr[4772]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:33:54 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'snap_schedule'
Oct  9 09:33:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:54.893+0000 7f4e8db26140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:33:54 compute-0 ceph-mgr[4772]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:33:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:54.964+0000 7f4e8db26140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:33:54 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'stats'
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'status'
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'telegraf'
Oct  9 09:33:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:55.094+0000 7f4e8db26140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'telemetry'
Oct  9 09:33:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:55.159+0000 7f4e8db26140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'test_orchestrator'
Oct  9 09:33:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:55.293+0000 7f4e8db26140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'volumes'
Oct  9 09:33:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:55.484+0000 7f4e8db26140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'zabbix'
Oct  9 09:33:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:55.717+0000 7f4e8db26140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:33:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:33:55.779+0000 7f4e8db26140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:33:55 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Active manager daemon compute-0.lwqgfy restarted
Oct  9 09:33:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Oct  9 09:33:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  9 09:33:55 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.lwqgfy
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: ms_deliver_dispatch: unhandled message 0x55e43f1aed00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct  9 09:33:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct  9 09:33:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct  9 09:33:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr handle_mgr_map Activating!
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr handle_mgr_map I am now activating
Oct  9 09:33:55 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Oct  9 09:33:55 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.lwqgfy(active, starting, since 0.00492417s)
Oct  9 09:33:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  9 09:33:55 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  9 09:33:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.lwqgfy", "id": "compute-0.lwqgfy"} v 0)
Oct  9 09:33:55 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-0.lwqgfy", "id": "compute-0.lwqgfy"}]: dispatch
Oct  9 09:33:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct  9 09:33:55 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  9 09:33:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e1 all = 1
Oct  9 09:33:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct  9 09:33:55 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  9 09:33:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct  9 09:33:55 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: balancer
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [balancer INFO root] Starting
Oct  9 09:33:55 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Manager daemon compute-0.lwqgfy is now available
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:33:55
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [balancer INFO root] No pools available
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Oct  9 09:33:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Oct  9 09:33:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Oct  9 09:33:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: cephadm
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: crash
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: devicehealth
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: iostat
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [devicehealth INFO root] Starting
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: nfs
Oct  9 09:33:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct  9 09:33:55 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: orchestrator
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: pg_autoscaler
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: progress
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:33:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct  9 09:33:55 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [progress INFO root] Loading...
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [progress INFO root] No stored events to load
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [progress INFO root] Loaded [] historic events
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [progress INFO root] Loaded OSDMap, ready.
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [rbd_support INFO root] recovery thread starting
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [rbd_support INFO root] starting setup
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: rbd_support
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: restful
Oct  9 09:33:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"} v 0)
Oct  9 09:33:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"}]: dispatch
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: status
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: telemetry
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [restful INFO root] server_addr: :: server_port: 8003
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [restful WARNING root] server not running: no certificate configured
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [rbd_support INFO root] PerfHandler: starting
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TaskHandler: starting
Oct  9 09:33:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"} v 0)
Oct  9 09:33:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"}]: dispatch
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: [rbd_support INFO root] setup complete
Oct  9 09:33:55 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: volumes
Oct  9 09:33:55 compute-0 ceph-mon[4497]: Active manager daemon compute-0.lwqgfy restarted
Oct  9 09:33:55 compute-0 ceph-mon[4497]: Activating manager daemon compute-0.lwqgfy
Oct  9 09:33:55 compute-0 ceph-mon[4497]: Manager daemon compute-0.lwqgfy is now available
Oct  9 09:33:55 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:55 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:55 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"}]: dispatch
Oct  9 09:33:55 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"}]: dispatch
Oct  9 09:33:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019932593 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:33:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Oct  9 09:33:56 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Oct  9 09:33:56 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:56 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.lwqgfy(active, since 1.00736s)
Oct  9 09:33:56 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct  9 09:33:56 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct  9 09:33:56 compute-0 admiring_meninsky[5267]: {
Oct  9 09:33:56 compute-0 admiring_meninsky[5267]:    "mgrmap_epoch": 7,
Oct  9 09:33:56 compute-0 admiring_meninsky[5267]:    "initialized": true
Oct  9 09:33:56 compute-0 admiring_meninsky[5267]: }
Oct  9 09:33:56 compute-0 systemd[1]: libpod-ac7c0f3f05637558ec5d4ba454b8215dc5f746bd730b1da2cddf3e279f0794e1.scope: Deactivated successfully.
Oct  9 09:33:56 compute-0 podman[5251]: 2025-10-09 09:33:56.808211348 +0000 UTC m=+5.867867963 container died ac7c0f3f05637558ec5d4ba454b8215dc5f746bd730b1da2cddf3e279f0794e1 (image=quay.io/ceph/ceph:v19, name=admiring_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  9 09:33:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-247ea902f350b4f25c2a697ec97110e12c4d182a160338d480239ebdeb8938b7-merged.mount: Deactivated successfully.
Oct  9 09:33:56 compute-0 podman[5251]: 2025-10-09 09:33:56.826219542 +0000 UTC m=+5.885876147 container remove ac7c0f3f05637558ec5d4ba454b8215dc5f746bd730b1da2cddf3e279f0794e1 (image=quay.io/ceph/ceph:v19, name=admiring_meninsky, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:33:56 compute-0 systemd[1]: libpod-conmon-ac7c0f3f05637558ec5d4ba454b8215dc5f746bd730b1da2cddf3e279f0794e1.scope: Deactivated successfully.
Oct  9 09:33:56 compute-0 podman[5424]: 2025-10-09 09:33:56.865079763 +0000 UTC m=+0.025557650 container create 2bac231c27011de2fa052461eab10799bd4350e5a955d286b6e3109427030717 (image=quay.io/ceph/ceph:v19, name=sad_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 09:33:56 compute-0 systemd[1]: Started libpod-conmon-2bac231c27011de2fa052461eab10799bd4350e5a955d286b6e3109427030717.scope.
Oct  9 09:33:56 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e19155dc241245b56b87bd81f5cb9a41d74c56c3784b292ac259ffbcd5e94572/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e19155dc241245b56b87bd81f5cb9a41d74c56c3784b292ac259ffbcd5e94572/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e19155dc241245b56b87bd81f5cb9a41d74c56c3784b292ac259ffbcd5e94572/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:56 compute-0 podman[5424]: 2025-10-09 09:33:56.91169219 +0000 UTC m=+0.072170097 container init 2bac231c27011de2fa052461eab10799bd4350e5a955d286b6e3109427030717 (image=quay.io/ceph/ceph:v19, name=sad_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct  9 09:33:56 compute-0 podman[5424]: 2025-10-09 09:33:56.915914781 +0000 UTC m=+0.076392668 container start 2bac231c27011de2fa052461eab10799bd4350e5a955d286b6e3109427030717 (image=quay.io/ceph/ceph:v19, name=sad_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:33:56 compute-0 podman[5424]: 2025-10-09 09:33:56.91689519 +0000 UTC m=+0.077373077 container attach 2bac231c27011de2fa052461eab10799bd4350e5a955d286b6e3109427030717 (image=quay.io/ceph/ceph:v19, name=sad_brown, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:33:56 compute-0 podman[5424]: 2025-10-09 09:33:56.854494995 +0000 UTC m=+0.014972902 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:57 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 09:33:57 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Oct  9 09:33:57 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:57 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct  9 09:33:57 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  9 09:33:57 compute-0 systemd[1]: libpod-2bac231c27011de2fa052461eab10799bd4350e5a955d286b6e3109427030717.scope: Deactivated successfully.
Oct  9 09:33:57 compute-0 podman[5424]: 2025-10-09 09:33:57.19888732 +0000 UTC m=+0.359365207 container died 2bac231c27011de2fa052461eab10799bd4350e5a955d286b6e3109427030717 (image=quay.io/ceph/ceph:v19, name=sad_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  9 09:33:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-e19155dc241245b56b87bd81f5cb9a41d74c56c3784b292ac259ffbcd5e94572-merged.mount: Deactivated successfully.
Oct  9 09:33:57 compute-0 podman[5424]: 2025-10-09 09:33:57.217742772 +0000 UTC m=+0.378220659 container remove 2bac231c27011de2fa052461eab10799bd4350e5a955d286b6e3109427030717 (image=quay.io/ceph/ceph:v19, name=sad_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct  9 09:33:57 compute-0 systemd[1]: libpod-conmon-2bac231c27011de2fa052461eab10799bd4350e5a955d286b6e3109427030717.scope: Deactivated successfully.
Oct  9 09:33:57 compute-0 podman[5473]: 2025-10-09 09:33:57.255261883 +0000 UTC m=+0.025591162 container create cb1ec5d4cb8b8e0e055b3e5141b11f9e51985f1588154604c2f03c110085c3bc (image=quay.io/ceph/ceph:v19, name=vibrant_noyce, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  9 09:33:57 compute-0 systemd[1]: Started libpod-conmon-cb1ec5d4cb8b8e0e055b3e5141b11f9e51985f1588154604c2f03c110085c3bc.scope.
Oct  9 09:33:57 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95174f3ddd6e79ec0677e07f30d2985b14eeeb0857d4c136c47fe01501e3fd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95174f3ddd6e79ec0677e07f30d2985b14eeeb0857d4c136c47fe01501e3fd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95174f3ddd6e79ec0677e07f30d2985b14eeeb0857d4c136c47fe01501e3fd7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:57 compute-0 podman[5473]: 2025-10-09 09:33:57.303474247 +0000 UTC m=+0.073803525 container init cb1ec5d4cb8b8e0e055b3e5141b11f9e51985f1588154604c2f03c110085c3bc (image=quay.io/ceph/ceph:v19, name=vibrant_noyce, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:33:57 compute-0 podman[5473]: 2025-10-09 09:33:57.308224643 +0000 UTC m=+0.078553922 container start cb1ec5d4cb8b8e0e055b3e5141b11f9e51985f1588154604c2f03c110085c3bc (image=quay.io/ceph/ceph:v19, name=vibrant_noyce, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 09:33:57 compute-0 podman[5473]: 2025-10-09 09:33:57.30917757 +0000 UTC m=+0.079506849 container attach cb1ec5d4cb8b8e0e055b3e5141b11f9e51985f1588154604c2f03c110085c3bc (image=quay.io/ceph/ceph:v19, name=vibrant_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 09:33:57 compute-0 podman[5473]: 2025-10-09 09:33:57.245431579 +0000 UTC m=+0.015760868 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:57 compute-0 ceph-mon[4497]: Found migration_current of "None". Setting to last migration.
Oct  9 09:33:57 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:57 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:57 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:57 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 09:33:57 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Oct  9 09:33:57 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:57 compute-0 ceph-mgr[4772]: [cephadm INFO root] Set ssh ssh_user
Oct  9 09:33:57 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Oct  9 09:33:57 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Oct  9 09:33:57 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:57 compute-0 ceph-mgr[4772]: [cephadm INFO root] Set ssh ssh_config
Oct  9 09:33:57 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Oct  9 09:33:57 compute-0 ceph-mgr[4772]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Oct  9 09:33:57 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Oct  9 09:33:57 compute-0 vibrant_noyce[5486]: ssh user set to ceph-admin. sudo will be used
Oct  9 09:33:57 compute-0 systemd[1]: libpod-cb1ec5d4cb8b8e0e055b3e5141b11f9e51985f1588154604c2f03c110085c3bc.scope: Deactivated successfully.
Oct  9 09:33:57 compute-0 podman[5473]: 2025-10-09 09:33:57.583799554 +0000 UTC m=+0.354128832 container died cb1ec5d4cb8b8e0e055b3e5141b11f9e51985f1588154604c2f03c110085c3bc (image=quay.io/ceph/ceph:v19, name=vibrant_noyce, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct  9 09:33:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c95174f3ddd6e79ec0677e07f30d2985b14eeeb0857d4c136c47fe01501e3fd7-merged.mount: Deactivated successfully.
Oct  9 09:33:57 compute-0 podman[5473]: 2025-10-09 09:33:57.600532624 +0000 UTC m=+0.370861902 container remove cb1ec5d4cb8b8e0e055b3e5141b11f9e51985f1588154604c2f03c110085c3bc (image=quay.io/ceph/ceph:v19, name=vibrant_noyce, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:33:57 compute-0 systemd[1]: libpod-conmon-cb1ec5d4cb8b8e0e055b3e5141b11f9e51985f1588154604c2f03c110085c3bc.scope: Deactivated successfully.
Oct  9 09:33:57 compute-0 podman[5521]: 2025-10-09 09:33:57.642338628 +0000 UTC m=+0.028384870 container create 8c1acf36235c6569805c86f6ac0fc4743d7c4a2dfade5bbfceb6bdc7a6c7d507 (image=quay.io/ceph/ceph:v19, name=laughing_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:33:57 compute-0 systemd[1]: Started libpod-conmon-8c1acf36235c6569805c86f6ac0fc4743d7c4a2dfade5bbfceb6bdc7a6c7d507.scope.
Oct  9 09:33:57 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c076d461c7939bf1c4879388b22a3b3282f61094a7580f6ae4ed99b15d42bcc4/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c076d461c7939bf1c4879388b22a3b3282f61094a7580f6ae4ed99b15d42bcc4/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c076d461c7939bf1c4879388b22a3b3282f61094a7580f6ae4ed99b15d42bcc4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c076d461c7939bf1c4879388b22a3b3282f61094a7580f6ae4ed99b15d42bcc4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c076d461c7939bf1c4879388b22a3b3282f61094a7580f6ae4ed99b15d42bcc4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:57 compute-0 podman[5521]: 2025-10-09 09:33:57.697794279 +0000 UTC m=+0.083840531 container init 8c1acf36235c6569805c86f6ac0fc4743d7c4a2dfade5bbfceb6bdc7a6c7d507 (image=quay.io/ceph/ceph:v19, name=laughing_keller, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:33:57 compute-0 podman[5521]: 2025-10-09 09:33:57.701609431 +0000 UTC m=+0.087655674 container start 8c1acf36235c6569805c86f6ac0fc4743d7c4a2dfade5bbfceb6bdc7a6c7d507 (image=quay.io/ceph/ceph:v19, name=laughing_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:33:57 compute-0 podman[5521]: 2025-10-09 09:33:57.702733482 +0000 UTC m=+0.088779723 container attach 8c1acf36235c6569805c86f6ac0fc4743d7c4a2dfade5bbfceb6bdc7a6c7d507 (image=quay.io/ceph/ceph:v19, name=laughing_keller, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  9 09:33:57 compute-0 podman[5521]: 2025-10-09 09:33:57.631573821 +0000 UTC m=+0.017620073 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:57 compute-0 ceph-mgr[4772]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  9 09:33:57 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 09:33:57 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Oct  9 09:33:57 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:57 compute-0 ceph-mgr[4772]: [cephadm INFO root] Set ssh ssh_identity_key
Oct  9 09:33:57 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Oct  9 09:33:57 compute-0 ceph-mgr[4772]: [cephadm INFO root] Set ssh private key
Oct  9 09:33:57 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Set ssh private key
Oct  9 09:33:57 compute-0 systemd[1]: libpod-8c1acf36235c6569805c86f6ac0fc4743d7c4a2dfade5bbfceb6bdc7a6c7d507.scope: Deactivated successfully.
Oct  9 09:33:57 compute-0 podman[5521]: 2025-10-09 09:33:57.975354211 +0000 UTC m=+0.361400454 container died 8c1acf36235c6569805c86f6ac0fc4743d7c4a2dfade5bbfceb6bdc7a6c7d507 (image=quay.io/ceph/ceph:v19, name=laughing_keller, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  9 09:33:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c076d461c7939bf1c4879388b22a3b3282f61094a7580f6ae4ed99b15d42bcc4-merged.mount: Deactivated successfully.
Oct  9 09:33:57 compute-0 podman[5521]: 2025-10-09 09:33:57.994882161 +0000 UTC m=+0.380928403 container remove 8c1acf36235c6569805c86f6ac0fc4743d7c4a2dfade5bbfceb6bdc7a6c7d507 (image=quay.io/ceph/ceph:v19, name=laughing_keller, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:33:58 compute-0 systemd[1]: libpod-conmon-8c1acf36235c6569805c86f6ac0fc4743d7c4a2dfade5bbfceb6bdc7a6c7d507.scope: Deactivated successfully.
Oct  9 09:33:58 compute-0 podman[5570]: 2025-10-09 09:33:58.035317622 +0000 UTC m=+0.026734909 container create ac9b4488185cc85d724628cf4062244916c9c55575a6a44d11a611f06d6af8f4 (image=quay.io/ceph/ceph:v19, name=pensive_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct  9 09:33:58 compute-0 ceph-mgr[4772]: [cephadm INFO cherrypy.error] [09/Oct/2025:09:33:58] ENGINE Bus STARTING
Oct  9 09:33:58 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : [09/Oct/2025:09:33:58] ENGINE Bus STARTING
Oct  9 09:33:58 compute-0 systemd[1]: Started libpod-conmon-ac9b4488185cc85d724628cf4062244916c9c55575a6a44d11a611f06d6af8f4.scope.
Oct  9 09:33:58 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4f121837db8014d00f497ef93fd460357e500b7ed1bfd6db0b1386d9e16f9ee/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4f121837db8014d00f497ef93fd460357e500b7ed1bfd6db0b1386d9e16f9ee/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4f121837db8014d00f497ef93fd460357e500b7ed1bfd6db0b1386d9e16f9ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4f121837db8014d00f497ef93fd460357e500b7ed1bfd6db0b1386d9e16f9ee/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4f121837db8014d00f497ef93fd460357e500b7ed1bfd6db0b1386d9e16f9ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:58 compute-0 podman[5570]: 2025-10-09 09:33:58.082087044 +0000 UTC m=+0.073504351 container init ac9b4488185cc85d724628cf4062244916c9c55575a6a44d11a611f06d6af8f4 (image=quay.io/ceph/ceph:v19, name=pensive_goldwasser, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct  9 09:33:58 compute-0 podman[5570]: 2025-10-09 09:33:58.086879872 +0000 UTC m=+0.078297159 container start ac9b4488185cc85d724628cf4062244916c9c55575a6a44d11a611f06d6af8f4 (image=quay.io/ceph/ceph:v19, name=pensive_goldwasser, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  9 09:33:58 compute-0 podman[5570]: 2025-10-09 09:33:58.087994713 +0000 UTC m=+0.079412000 container attach ac9b4488185cc85d724628cf4062244916c9c55575a6a44d11a611f06d6af8f4 (image=quay.io/ceph/ceph:v19, name=pensive_goldwasser, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  9 09:33:58 compute-0 podman[5570]: 2025-10-09 09:33:58.024983627 +0000 UTC m=+0.016400934 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:58 compute-0 ceph-mgr[4772]: [cephadm INFO cherrypy.error] [09/Oct/2025:09:33:58] ENGINE Serving on https://192.168.122.100:7150
Oct  9 09:33:58 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : [09/Oct/2025:09:33:58] ENGINE Serving on https://192.168.122.100:7150
Oct  9 09:33:58 compute-0 ceph-mgr[4772]: [cephadm INFO cherrypy.error] [09/Oct/2025:09:33:58] ENGINE Client ('192.168.122.100', 42880) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 09:33:58 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : [09/Oct/2025:09:33:58] ENGINE Client ('192.168.122.100', 42880) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 09:33:58 compute-0 ceph-mgr[4772]: [cephadm INFO cherrypy.error] [09/Oct/2025:09:33:58] ENGINE Serving on http://192.168.122.100:8765
Oct  9 09:33:58 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : [09/Oct/2025:09:33:58] ENGINE Serving on http://192.168.122.100:8765
Oct  9 09:33:58 compute-0 ceph-mgr[4772]: [cephadm INFO cherrypy.error] [09/Oct/2025:09:33:58] ENGINE Bus STARTED
Oct  9 09:33:58 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : [09/Oct/2025:09:33:58] ENGINE Bus STARTED
Oct  9 09:33:58 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct  9 09:33:58 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  9 09:33:58 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 09:33:58 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Oct  9 09:33:58 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:58 compute-0 ceph-mgr[4772]: [cephadm INFO root] Set ssh ssh_identity_pub
Oct  9 09:33:58 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Oct  9 09:33:58 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:58 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:58 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:58 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:33:58 compute-0 podman[5570]: 2025-10-09 09:33:58.364335678 +0000 UTC m=+0.355752965 container died ac9b4488185cc85d724628cf4062244916c9c55575a6a44d11a611f06d6af8f4 (image=quay.io/ceph/ceph:v19, name=pensive_goldwasser, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct  9 09:33:58 compute-0 systemd[1]: libpod-ac9b4488185cc85d724628cf4062244916c9c55575a6a44d11a611f06d6af8f4.scope: Deactivated successfully.
Oct  9 09:33:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4f121837db8014d00f497ef93fd460357e500b7ed1bfd6db0b1386d9e16f9ee-merged.mount: Deactivated successfully.
Oct  9 09:33:58 compute-0 podman[5570]: 2025-10-09 09:33:58.38431594 +0000 UTC m=+0.375733227 container remove ac9b4488185cc85d724628cf4062244916c9c55575a6a44d11a611f06d6af8f4 (image=quay.io/ceph/ceph:v19, name=pensive_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:33:58 compute-0 systemd[1]: libpod-conmon-ac9b4488185cc85d724628cf4062244916c9c55575a6a44d11a611f06d6af8f4.scope: Deactivated successfully.
Oct  9 09:33:58 compute-0 podman[5645]: 2025-10-09 09:33:58.427305607 +0000 UTC m=+0.029629838 container create 5a4f94ddcad9f45af9d8c7af959b6a897708bef13e37bd859ffe99ca44458f8e (image=quay.io/ceph/ceph:v19, name=friendly_poincare, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  9 09:33:58 compute-0 systemd[1]: Started libpod-conmon-5a4f94ddcad9f45af9d8c7af959b6a897708bef13e37bd859ffe99ca44458f8e.scope.
Oct  9 09:33:58 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c52ac129ca81f9a4345a6366eb831f732c9a6dc8aaff856fe6fc5716d68b43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c52ac129ca81f9a4345a6366eb831f732c9a6dc8aaff856fe6fc5716d68b43/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c52ac129ca81f9a4345a6366eb831f732c9a6dc8aaff856fe6fc5716d68b43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:58 compute-0 podman[5645]: 2025-10-09 09:33:58.469539759 +0000 UTC m=+0.071863999 container init 5a4f94ddcad9f45af9d8c7af959b6a897708bef13e37bd859ffe99ca44458f8e (image=quay.io/ceph/ceph:v19, name=friendly_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  9 09:33:58 compute-0 podman[5645]: 2025-10-09 09:33:58.474422034 +0000 UTC m=+0.076746275 container start 5a4f94ddcad9f45af9d8c7af959b6a897708bef13e37bd859ffe99ca44458f8e (image=quay.io/ceph/ceph:v19, name=friendly_poincare, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  9 09:33:58 compute-0 podman[5645]: 2025-10-09 09:33:58.475628469 +0000 UTC m=+0.077952710 container attach 5a4f94ddcad9f45af9d8c7af959b6a897708bef13e37bd859ffe99ca44458f8e (image=quay.io/ceph/ceph:v19, name=friendly_poincare, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:33:58 compute-0 podman[5645]: 2025-10-09 09:33:58.41607987 +0000 UTC m=+0.018404132 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:58 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.lwqgfy(active, since 2s)
Oct  9 09:33:58 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 09:33:58 compute-0 friendly_poincare[5658]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSn4QZAYY3SfnbUsFTmpqPwqGDcFv+IdjMwOcvUs0V8DgbwTQwXwO/qOrCzZwk1WHFfs4zUEi2UdIZBAZuQSxck/eMjKLdYfrr+7PMmgEvgc2OfXeBQqPuF1t5GQGcjghSG2TUCnB2GVfJ6R9hqXn0ChxCytpsS1//1J/Eo8n3GmWf1+RLsFpdjhO7Qt0AyxPS6wAkmeyyPpIwBbcWSP4dRMSDEflBOnFxUeaiZl1lttpXB/gAMxm1FUNIsActR/kSQgx1YNGEppsJokVJ4mPe2XLrlDyXJnIMMJsZWr4ouKCIWenGJVCnPf8Af7VAz/EONt1v3Ux5Xs3cMhMqrtedD4qQzb4NNUmBiG4OmJf5QDaeFuyjq86m10DmV6aoCSjKtsbzL8SQGksQS0m57DMSI67Er7As5gz826RfPqGIbY8fyhHDWHWzUN7mebPo322ytLqUDTW31rS+7m31njfxZalEibDjN9Q0owAAJHagZ8eyoO4LbvaRz/k2Utnepb8= zuul@controller
Oct  9 09:33:58 compute-0 systemd[1]: libpod-5a4f94ddcad9f45af9d8c7af959b6a897708bef13e37bd859ffe99ca44458f8e.scope: Deactivated successfully.
Oct  9 09:33:58 compute-0 conmon[5658]: conmon 5a4f94ddcad9f45af9d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a4f94ddcad9f45af9d8c7af959b6a897708bef13e37bd859ffe99ca44458f8e.scope/container/memory.events
Oct  9 09:33:58 compute-0 podman[5645]: 2025-10-09 09:33:58.744726076 +0000 UTC m=+0.347050327 container died 5a4f94ddcad9f45af9d8c7af959b6a897708bef13e37bd859ffe99ca44458f8e (image=quay.io/ceph/ceph:v19, name=friendly_poincare, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:33:58 compute-0 podman[5645]: 2025-10-09 09:33:58.762947673 +0000 UTC m=+0.365271914 container remove 5a4f94ddcad9f45af9d8c7af959b6a897708bef13e37bd859ffe99ca44458f8e (image=quay.io/ceph/ceph:v19, name=friendly_poincare, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:33:58 compute-0 systemd[1]: libpod-conmon-5a4f94ddcad9f45af9d8c7af959b6a897708bef13e37bd859ffe99ca44458f8e.scope: Deactivated successfully.
Oct  9 09:33:58 compute-0 podman[5694]: 2025-10-09 09:33:58.804380363 +0000 UTC m=+0.026719439 container create 4f40df1c046521a8d6b98313f1501b2faa2052270b29c5366e7e1c692a5f331c (image=quay.io/ceph/ceph:v19, name=goofy_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 09:33:58 compute-0 systemd[1]: Started libpod-conmon-4f40df1c046521a8d6b98313f1501b2faa2052270b29c5366e7e1c692a5f331c.scope.
Oct  9 09:33:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-66c52ac129ca81f9a4345a6366eb831f732c9a6dc8aaff856fe6fc5716d68b43-merged.mount: Deactivated successfully.
Oct  9 09:33:58 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c851e57356ff2c6215b7268cec23aec8287abdf3854fc4d372e642ee41b357e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c851e57356ff2c6215b7268cec23aec8287abdf3854fc4d372e642ee41b357e4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c851e57356ff2c6215b7268cec23aec8287abdf3854fc4d372e642ee41b357e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:33:58 compute-0 podman[5694]: 2025-10-09 09:33:58.856119636 +0000 UTC m=+0.078458722 container init 4f40df1c046521a8d6b98313f1501b2faa2052270b29c5366e7e1c692a5f331c (image=quay.io/ceph/ceph:v19, name=goofy_jones, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:33:58 compute-0 podman[5694]: 2025-10-09 09:33:58.860736472 +0000 UTC m=+0.083075547 container start 4f40df1c046521a8d6b98313f1501b2faa2052270b29c5366e7e1c692a5f331c (image=quay.io/ceph/ceph:v19, name=goofy_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:33:58 compute-0 podman[5694]: 2025-10-09 09:33:58.862036062 +0000 UTC m=+0.084375138 container attach 4f40df1c046521a8d6b98313f1501b2faa2052270b29c5366e7e1c692a5f331c (image=quay.io/ceph/ceph:v19, name=goofy_jones, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:33:58 compute-0 podman[5694]: 2025-10-09 09:33:58.793673947 +0000 UTC m=+0.016013033 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:33:59 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 09:33:59 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct  9 09:33:59 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  9 09:33:59 compute-0 systemd-logind[798]: New session 6 of user ceph-admin.
Oct  9 09:33:59 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  9 09:33:59 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct  9 09:33:59 compute-0 ceph-mon[4497]: Set ssh ssh_user
Oct  9 09:33:59 compute-0 ceph-mon[4497]: Set ssh ssh_config
Oct  9 09:33:59 compute-0 ceph-mon[4497]: ssh user set to ceph-admin. sudo will be used
Oct  9 09:33:59 compute-0 ceph-mon[4497]: Set ssh ssh_identity_key
Oct  9 09:33:59 compute-0 ceph-mon[4497]: Set ssh private key
Oct  9 09:33:59 compute-0 ceph-mon[4497]: [09/Oct/2025:09:33:58] ENGINE Bus STARTING
Oct  9 09:33:59 compute-0 ceph-mon[4497]: [09/Oct/2025:09:33:58] ENGINE Serving on https://192.168.122.100:7150
Oct  9 09:33:59 compute-0 ceph-mon[4497]: [09/Oct/2025:09:33:58] ENGINE Client ('192.168.122.100', 42880) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 09:33:59 compute-0 ceph-mon[4497]: [09/Oct/2025:09:33:58] ENGINE Serving on http://192.168.122.100:8765
Oct  9 09:33:59 compute-0 ceph-mon[4497]: [09/Oct/2025:09:33:58] ENGINE Bus STARTED
Oct  9 09:33:59 compute-0 ceph-mon[4497]: Set ssh ssh_identity_pub
Oct  9 09:33:59 compute-0 systemd[5737]: Queued start job for default target Main User Target.
Oct  9 09:33:59 compute-0 systemd[5737]: Created slice User Application Slice.
Oct  9 09:33:59 compute-0 systemd[5737]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 09:33:59 compute-0 systemd[5737]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 09:33:59 compute-0 systemd[5737]: Reached target Paths.
Oct  9 09:33:59 compute-0 systemd[5737]: Reached target Timers.
Oct  9 09:33:59 compute-0 systemd[5737]: Starting D-Bus User Message Bus Socket...
Oct  9 09:33:59 compute-0 systemd[5737]: Starting Create User's Volatile Files and Directories...
Oct  9 09:33:59 compute-0 systemd[5737]: Listening on D-Bus User Message Bus Socket.
Oct  9 09:33:59 compute-0 systemd[5737]: Reached target Sockets.
Oct  9 09:33:59 compute-0 systemd[5737]: Finished Create User's Volatile Files and Directories.
Oct  9 09:33:59 compute-0 systemd[5737]: Reached target Basic System.
Oct  9 09:33:59 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct  9 09:33:59 compute-0 systemd[5737]: Reached target Main User Target.
Oct  9 09:33:59 compute-0 systemd[5737]: Startup finished in 85ms.
Oct  9 09:33:59 compute-0 systemd[1]: Started Session 6 of User ceph-admin.
Oct  9 09:33:59 compute-0 systemd-logind[798]: New session 8 of user ceph-admin.
Oct  9 09:33:59 compute-0 systemd[1]: Started Session 8 of User ceph-admin.
Oct  9 09:33:59 compute-0 systemd-logind[798]: New session 9 of user ceph-admin.
Oct  9 09:33:59 compute-0 systemd[1]: Started Session 9 of User ceph-admin.
Oct  9 09:33:59 compute-0 ceph-mgr[4772]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  9 09:34:00 compute-0 systemd-logind[798]: New session 10 of user ceph-admin.
Oct  9 09:34:00 compute-0 systemd[1]: Started Session 10 of User ceph-admin.
Oct  9 09:34:00 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Oct  9 09:34:00 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Oct  9 09:34:00 compute-0 systemd-logind[798]: New session 11 of user ceph-admin.
Oct  9 09:34:00 compute-0 systemd[1]: Started Session 11 of User ceph-admin.
Oct  9 09:34:00 compute-0 systemd-logind[798]: New session 12 of user ceph-admin.
Oct  9 09:34:00 compute-0 systemd[1]: Started Session 12 of User ceph-admin.
Oct  9 09:34:00 compute-0 systemd-logind[798]: New session 13 of user ceph-admin.
Oct  9 09:34:00 compute-0 systemd[1]: Started Session 13 of User ceph-admin.
Oct  9 09:34:00 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053161 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:34:01 compute-0 systemd-logind[798]: New session 14 of user ceph-admin.
Oct  9 09:34:01 compute-0 systemd[1]: Started Session 14 of User ceph-admin.
Oct  9 09:34:01 compute-0 systemd-logind[798]: New session 15 of user ceph-admin.
Oct  9 09:34:01 compute-0 systemd[1]: Started Session 15 of User ceph-admin.
Oct  9 09:34:01 compute-0 ceph-mon[4497]: Deploying cephadm binary to compute-0
Oct  9 09:34:01 compute-0 systemd-logind[798]: New session 16 of user ceph-admin.
Oct  9 09:34:01 compute-0 systemd[1]: Started Session 16 of User ceph-admin.
Oct  9 09:34:01 compute-0 ceph-mgr[4772]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  9 09:34:02 compute-0 systemd-logind[798]: New session 17 of user ceph-admin.
Oct  9 09:34:02 compute-0 systemd[1]: Started Session 17 of User ceph-admin.
Oct  9 09:34:02 compute-0 systemd-logind[798]: New session 18 of user ceph-admin.
Oct  9 09:34:02 compute-0 systemd[1]: Started Session 18 of User ceph-admin.
Oct  9 09:34:02 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  9 09:34:02 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:02 compute-0 ceph-mgr[4772]: [cephadm INFO root] Added host compute-0
Oct  9 09:34:02 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Added host compute-0
Oct  9 09:34:02 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct  9 09:34:02 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  9 09:34:03 compute-0 goofy_jones[5707]: Added host 'compute-0' with addr '192.168.122.100'
Oct  9 09:34:03 compute-0 systemd[1]: libpod-4f40df1c046521a8d6b98313f1501b2faa2052270b29c5366e7e1c692a5f331c.scope: Deactivated successfully.
Oct  9 09:34:03 compute-0 podman[5694]: 2025-10-09 09:34:03.014189451 +0000 UTC m=+4.236528537 container died 4f40df1c046521a8d6b98313f1501b2faa2052270b29c5366e7e1c692a5f331c (image=quay.io/ceph/ceph:v19, name=goofy_jones, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  9 09:34:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c851e57356ff2c6215b7268cec23aec8287abdf3854fc4d372e642ee41b357e4-merged.mount: Deactivated successfully.
Oct  9 09:34:03 compute-0 podman[5694]: 2025-10-09 09:34:03.037464365 +0000 UTC m=+4.259803441 container remove 4f40df1c046521a8d6b98313f1501b2faa2052270b29c5366e7e1c692a5f331c (image=quay.io/ceph/ceph:v19, name=goofy_jones, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  9 09:34:03 compute-0 systemd[1]: libpod-conmon-4f40df1c046521a8d6b98313f1501b2faa2052270b29c5366e7e1c692a5f331c.scope: Deactivated successfully.
Oct  9 09:34:03 compute-0 podman[6124]: 2025-10-09 09:34:03.079127139 +0000 UTC m=+0.026138665 container create c10df71d8708abb9c048a45cd45e94a106432e3c56cc5479e64a8dd7fa054a1f (image=quay.io/ceph/ceph:v19, name=intelligent_mclean, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:34:03 compute-0 systemd[1]: Started libpod-conmon-c10df71d8708abb9c048a45cd45e94a106432e3c56cc5479e64a8dd7fa054a1f.scope.
Oct  9 09:34:03 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa392faa5a51b81cbf82be7205ccfe4dfd31f36b03379b8c84555e7c96df8965/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa392faa5a51b81cbf82be7205ccfe4dfd31f36b03379b8c84555e7c96df8965/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa392faa5a51b81cbf82be7205ccfe4dfd31f36b03379b8c84555e7c96df8965/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:03 compute-0 podman[6124]: 2025-10-09 09:34:03.134808014 +0000 UTC m=+0.081819561 container init c10df71d8708abb9c048a45cd45e94a106432e3c56cc5479e64a8dd7fa054a1f (image=quay.io/ceph/ceph:v19, name=intelligent_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:03 compute-0 podman[6124]: 2025-10-09 09:34:03.139486976 +0000 UTC m=+0.086498502 container start c10df71d8708abb9c048a45cd45e94a106432e3c56cc5479e64a8dd7fa054a1f (image=quay.io/ceph/ceph:v19, name=intelligent_mclean, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  9 09:34:03 compute-0 podman[6124]: 2025-10-09 09:34:03.140729699 +0000 UTC m=+0.087741225 container attach c10df71d8708abb9c048a45cd45e94a106432e3c56cc5479e64a8dd7fa054a1f (image=quay.io/ceph/ceph:v19, name=intelligent_mclean, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  9 09:34:03 compute-0 podman[6124]: 2025-10-09 09:34:03.06888649 +0000 UTC m=+0.015898036 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:03 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 09:34:03 compute-0 ceph-mgr[4772]: [cephadm INFO root] Saving service mon spec with placement count:5
Oct  9 09:34:03 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Oct  9 09:34:03 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct  9 09:34:03 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:03 compute-0 intelligent_mclean[6164]: Scheduled mon update...
Oct  9 09:34:03 compute-0 systemd[1]: libpod-c10df71d8708abb9c048a45cd45e94a106432e3c56cc5479e64a8dd7fa054a1f.scope: Deactivated successfully.
Oct  9 09:34:03 compute-0 podman[6124]: 2025-10-09 09:34:03.423228264 +0000 UTC m=+0.370239790 container died c10df71d8708abb9c048a45cd45e94a106432e3c56cc5479e64a8dd7fa054a1f (image=quay.io/ceph/ceph:v19, name=intelligent_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:34:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa392faa5a51b81cbf82be7205ccfe4dfd31f36b03379b8c84555e7c96df8965-merged.mount: Deactivated successfully.
Oct  9 09:34:03 compute-0 podman[6124]: 2025-10-09 09:34:03.442608426 +0000 UTC m=+0.389619952 container remove c10df71d8708abb9c048a45cd45e94a106432e3c56cc5479e64a8dd7fa054a1f (image=quay.io/ceph/ceph:v19, name=intelligent_mclean, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 09:34:03 compute-0 systemd[1]: libpod-conmon-c10df71d8708abb9c048a45cd45e94a106432e3c56cc5479e64a8dd7fa054a1f.scope: Deactivated successfully.
Oct  9 09:34:03 compute-0 podman[6222]: 2025-10-09 09:34:03.481891282 +0000 UTC m=+0.025984594 container create cc73b132d36ca431905fe1c1c29cf7abfeb9541edeaefd49b37ac3e6efb8ba36 (image=quay.io/ceph/ceph:v19, name=mystifying_moore, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct  9 09:34:03 compute-0 systemd[1]: Started libpod-conmon-cc73b132d36ca431905fe1c1c29cf7abfeb9541edeaefd49b37ac3e6efb8ba36.scope.
Oct  9 09:34:03 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b370e8af405b5cc995904d89cb0425a03d977daa2f7e6acf76cf0f5f8672575/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b370e8af405b5cc995904d89cb0425a03d977daa2f7e6acf76cf0f5f8672575/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b370e8af405b5cc995904d89cb0425a03d977daa2f7e6acf76cf0f5f8672575/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:03 compute-0 podman[6222]: 2025-10-09 09:34:03.536044296 +0000 UTC m=+0.080137618 container init cc73b132d36ca431905fe1c1c29cf7abfeb9541edeaefd49b37ac3e6efb8ba36 (image=quay.io/ceph/ceph:v19, name=mystifying_moore, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  9 09:34:03 compute-0 podman[6222]: 2025-10-09 09:34:03.54123328 +0000 UTC m=+0.085326602 container start cc73b132d36ca431905fe1c1c29cf7abfeb9541edeaefd49b37ac3e6efb8ba36 (image=quay.io/ceph/ceph:v19, name=mystifying_moore, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  9 09:34:03 compute-0 podman[6222]: 2025-10-09 09:34:03.542313096 +0000 UTC m=+0.086406418 container attach cc73b132d36ca431905fe1c1c29cf7abfeb9541edeaefd49b37ac3e6efb8ba36 (image=quay.io/ceph/ceph:v19, name=mystifying_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:34:03 compute-0 podman[6222]: 2025-10-09 09:34:03.470948439 +0000 UTC m=+0.015041771 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:03 compute-0 ceph-mgr[4772]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  9 09:34:03 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 09:34:03 compute-0 ceph-mgr[4772]: [cephadm INFO root] Saving service mgr spec with placement count:2
Oct  9 09:34:03 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Oct  9 09:34:03 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct  9 09:34:03 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:03 compute-0 mystifying_moore[6235]: Scheduled mgr update...
Oct  9 09:34:03 compute-0 systemd[1]: libpod-cc73b132d36ca431905fe1c1c29cf7abfeb9541edeaefd49b37ac3e6efb8ba36.scope: Deactivated successfully.
Oct  9 09:34:03 compute-0 conmon[6235]: conmon cc73b132d36ca431905f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc73b132d36ca431905fe1c1c29cf7abfeb9541edeaefd49b37ac3e6efb8ba36.scope/container/memory.events
Oct  9 09:34:03 compute-0 podman[6222]: 2025-10-09 09:34:03.823317825 +0000 UTC m=+0.367411147 container died cc73b132d36ca431905fe1c1c29cf7abfeb9541edeaefd49b37ac3e6efb8ba36 (image=quay.io/ceph/ceph:v19, name=mystifying_moore, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  9 09:34:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b370e8af405b5cc995904d89cb0425a03d977daa2f7e6acf76cf0f5f8672575-merged.mount: Deactivated successfully.
Oct  9 09:34:03 compute-0 podman[6222]: 2025-10-09 09:34:03.845577665 +0000 UTC m=+0.389670987 container remove cc73b132d36ca431905fe1c1c29cf7abfeb9541edeaefd49b37ac3e6efb8ba36 (image=quay.io/ceph/ceph:v19, name=mystifying_moore, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:03 compute-0 podman[6199]: 2025-10-09 09:34:03.84901691 +0000 UTC m=+0.559676580 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:03 compute-0 systemd[1]: libpod-conmon-cc73b132d36ca431905fe1c1c29cf7abfeb9541edeaefd49b37ac3e6efb8ba36.scope: Deactivated successfully.
Oct  9 09:34:03 compute-0 podman[6270]: 2025-10-09 09:34:03.886055085 +0000 UTC m=+0.026446986 container create caefe4ac35b62b85c8e1d15e08b50e637ee03500f0e0fbed8944af0fa6c851da (image=quay.io/ceph/ceph:v19, name=stoic_rosalind, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  9 09:34:03 compute-0 systemd[1]: Started libpod-conmon-caefe4ac35b62b85c8e1d15e08b50e637ee03500f0e0fbed8944af0fa6c851da.scope.
Oct  9 09:34:03 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3e2e389583b477f81af8285a732e8864bfcf12cf80c1f4857174209d4f41e6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3e2e389583b477f81af8285a732e8864bfcf12cf80c1f4857174209d4f41e6c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3e2e389583b477f81af8285a732e8864bfcf12cf80c1f4857174209d4f41e6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:03 compute-0 podman[6270]: 2025-10-09 09:34:03.928554226 +0000 UTC m=+0.068946148 container init caefe4ac35b62b85c8e1d15e08b50e637ee03500f0e0fbed8944af0fa6c851da (image=quay.io/ceph/ceph:v19, name=stoic_rosalind, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:34:03 compute-0 podman[6270]: 2025-10-09 09:34:03.933641778 +0000 UTC m=+0.074033670 container start caefe4ac35b62b85c8e1d15e08b50e637ee03500f0e0fbed8944af0fa6c851da (image=quay.io/ceph/ceph:v19, name=stoic_rosalind, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  9 09:34:03 compute-0 podman[6292]: 2025-10-09 09:34:03.934009251 +0000 UTC m=+0.034205084 container create d7c0b2b1a36c454312be6670601a29fd54391405910ada242c8b78b79baf5b94 (image=quay.io/ceph/ceph:v19, name=quirky_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  9 09:34:03 compute-0 podman[6270]: 2025-10-09 09:34:03.935776693 +0000 UTC m=+0.076168596 container attach caefe4ac35b62b85c8e1d15e08b50e637ee03500f0e0fbed8944af0fa6c851da (image=quay.io/ceph/ceph:v19, name=stoic_rosalind, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:03 compute-0 systemd[1]: Started libpod-conmon-d7c0b2b1a36c454312be6670601a29fd54391405910ada242c8b78b79baf5b94.scope.
Oct  9 09:34:03 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:03 compute-0 podman[6270]: 2025-10-09 09:34:03.875229684 +0000 UTC m=+0.015621596 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:03 compute-0 podman[6292]: 2025-10-09 09:34:03.975915574 +0000 UTC m=+0.076111417 container init d7c0b2b1a36c454312be6670601a29fd54391405910ada242c8b78b79baf5b94 (image=quay.io/ceph/ceph:v19, name=quirky_brattain, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:03 compute-0 podman[6292]: 2025-10-09 09:34:03.980167881 +0000 UTC m=+0.080363705 container start d7c0b2b1a36c454312be6670601a29fd54391405910ada242c8b78b79baf5b94 (image=quay.io/ceph/ceph:v19, name=quirky_brattain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  9 09:34:03 compute-0 podman[6292]: 2025-10-09 09:34:03.981393373 +0000 UTC m=+0.081589196 container attach d7c0b2b1a36c454312be6670601a29fd54391405910ada242c8b78b79baf5b94 (image=quay.io/ceph/ceph:v19, name=quirky_brattain, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  9 09:34:03 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:03 compute-0 ceph-mon[4497]: Added host compute-0
Oct  9 09:34:03 compute-0 ceph-mon[4497]: Saving service mon spec with placement count:5
Oct  9 09:34:03 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:03 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:04 compute-0 podman[6292]: 2025-10-09 09:34:03.919122663 +0000 UTC m=+0.019318496 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:04 compute-0 quirky_brattain[6314]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Oct  9 09:34:04 compute-0 systemd[1]: libpod-d7c0b2b1a36c454312be6670601a29fd54391405910ada242c8b78b79baf5b94.scope: Deactivated successfully.
Oct  9 09:34:04 compute-0 podman[6338]: 2025-10-09 09:34:04.088910364 +0000 UTC m=+0.016580172 container died d7c0b2b1a36c454312be6670601a29fd54391405910ada242c8b78b79baf5b94 (image=quay.io/ceph/ceph:v19, name=quirky_brattain, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:34:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-028c437c27e82b54b13fb26d930f257ed49b9f019f0327ec5b8d78ad3ecce551-merged.mount: Deactivated successfully.
Oct  9 09:34:04 compute-0 podman[6338]: 2025-10-09 09:34:04.105909385 +0000 UTC m=+0.033579184 container remove d7c0b2b1a36c454312be6670601a29fd54391405910ada242c8b78b79baf5b94 (image=quay.io/ceph/ceph:v19, name=quirky_brattain, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct  9 09:34:04 compute-0 systemd[1]: libpod-conmon-d7c0b2b1a36c454312be6670601a29fd54391405910ada242c8b78b79baf5b94.scope: Deactivated successfully.
Oct  9 09:34:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Oct  9 09:34:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:04 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 09:34:04 compute-0 ceph-mgr[4772]: [cephadm INFO root] Saving service crash spec with placement *
Oct  9 09:34:04 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Oct  9 09:34:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct  9 09:34:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:04 compute-0 stoic_rosalind[6303]: Scheduled crash update...
Oct  9 09:34:04 compute-0 podman[6270]: 2025-10-09 09:34:04.234446018 +0000 UTC m=+0.374837920 container died caefe4ac35b62b85c8e1d15e08b50e637ee03500f0e0fbed8944af0fa6c851da (image=quay.io/ceph/ceph:v19, name=stoic_rosalind, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:04 compute-0 systemd[1]: libpod-caefe4ac35b62b85c8e1d15e08b50e637ee03500f0e0fbed8944af0fa6c851da.scope: Deactivated successfully.
Oct  9 09:34:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3e2e389583b477f81af8285a732e8864bfcf12cf80c1f4857174209d4f41e6c-merged.mount: Deactivated successfully.
Oct  9 09:34:04 compute-0 podman[6270]: 2025-10-09 09:34:04.259068984 +0000 UTC m=+0.399460886 container remove caefe4ac35b62b85c8e1d15e08b50e637ee03500f0e0fbed8944af0fa6c851da (image=quay.io/ceph/ceph:v19, name=stoic_rosalind, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:04 compute-0 systemd[1]: libpod-conmon-caefe4ac35b62b85c8e1d15e08b50e637ee03500f0e0fbed8944af0fa6c851da.scope: Deactivated successfully.
Oct  9 09:34:04 compute-0 podman[6411]: 2025-10-09 09:34:04.3024346 +0000 UTC m=+0.026959341 container create 4bf6f625d43e0a3b3b489ed8f4273585c09518e106b3d805ecee682f3374f2a5 (image=quay.io/ceph/ceph:v19, name=interesting_pascal, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:04 compute-0 systemd[1]: Started libpod-conmon-4bf6f625d43e0a3b3b489ed8f4273585c09518e106b3d805ecee682f3374f2a5.scope.
Oct  9 09:34:04 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4f77ded0a5802d926530b1870ec81a52cfc95d339f69314f4d81e86a7d93927/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4f77ded0a5802d926530b1870ec81a52cfc95d339f69314f4d81e86a7d93927/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4f77ded0a5802d926530b1870ec81a52cfc95d339f69314f4d81e86a7d93927/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:04 compute-0 podman[6411]: 2025-10-09 09:34:04.357981182 +0000 UTC m=+0.082505944 container init 4bf6f625d43e0a3b3b489ed8f4273585c09518e106b3d805ecee682f3374f2a5 (image=quay.io/ceph/ceph:v19, name=interesting_pascal, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:34:04 compute-0 podman[6411]: 2025-10-09 09:34:04.363211203 +0000 UTC m=+0.087735945 container start 4bf6f625d43e0a3b3b489ed8f4273585c09518e106b3d805ecee682f3374f2a5 (image=quay.io/ceph/ceph:v19, name=interesting_pascal, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 09:34:04 compute-0 podman[6411]: 2025-10-09 09:34:04.364467982 +0000 UTC m=+0.088992724 container attach 4bf6f625d43e0a3b3b489ed8f4273585c09518e106b3d805ecee682f3374f2a5 (image=quay.io/ceph/ceph:v19, name=interesting_pascal, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  9 09:34:04 compute-0 podman[6411]: 2025-10-09 09:34:04.290962759 +0000 UTC m=+0.015487522 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:34:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Oct  9 09:34:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3441111421' entity='client.admin' 
Oct  9 09:34:04 compute-0 systemd[1]: libpod-4bf6f625d43e0a3b3b489ed8f4273585c09518e106b3d805ecee682f3374f2a5.scope: Deactivated successfully.
Oct  9 09:34:04 compute-0 podman[6411]: 2025-10-09 09:34:04.643278073 +0000 UTC m=+0.367802825 container died 4bf6f625d43e0a3b3b489ed8f4273585c09518e106b3d805ecee682f3374f2a5 (image=quay.io/ceph/ceph:v19, name=interesting_pascal, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct  9 09:34:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4f77ded0a5802d926530b1870ec81a52cfc95d339f69314f4d81e86a7d93927-merged.mount: Deactivated successfully.
Oct  9 09:34:04 compute-0 podman[6411]: 2025-10-09 09:34:04.661429647 +0000 UTC m=+0.385954389 container remove 4bf6f625d43e0a3b3b489ed8f4273585c09518e106b3d805ecee682f3374f2a5 (image=quay.io/ceph/ceph:v19, name=interesting_pascal, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:04 compute-0 systemd[1]: libpod-conmon-4bf6f625d43e0a3b3b489ed8f4273585c09518e106b3d805ecee682f3374f2a5.scope: Deactivated successfully.
Oct  9 09:34:04 compute-0 podman[6530]: 2025-10-09 09:34:04.704674274 +0000 UTC m=+0.026010583 container create 42779177869d92f0e5abeb296b8978a8de571ecf034582b8f4c9c6799d9f1eed (image=quay.io/ceph/ceph:v19, name=unruffled_wilson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:04 compute-0 systemd[1]: Started libpod-conmon-42779177869d92f0e5abeb296b8978a8de571ecf034582b8f4c9c6799d9f1eed.scope.
Oct  9 09:34:04 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e65ef1969b13fe17bf8e816e841240b6c9925ae0497ff0e50b4116ae4bb29b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e65ef1969b13fe17bf8e816e841240b6c9925ae0497ff0e50b4116ae4bb29b1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e65ef1969b13fe17bf8e816e841240b6c9925ae0497ff0e50b4116ae4bb29b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:04 compute-0 podman[6530]: 2025-10-09 09:34:04.748763282 +0000 UTC m=+0.070099612 container init 42779177869d92f0e5abeb296b8978a8de571ecf034582b8f4c9c6799d9f1eed (image=quay.io/ceph/ceph:v19, name=unruffled_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:34:04 compute-0 podman[6530]: 2025-10-09 09:34:04.753123624 +0000 UTC m=+0.074459923 container start 42779177869d92f0e5abeb296b8978a8de571ecf034582b8f4c9c6799d9f1eed (image=quay.io/ceph/ceph:v19, name=unruffled_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:34:04 compute-0 podman[6530]: 2025-10-09 09:34:04.754243735 +0000 UTC m=+0.075580035 container attach 42779177869d92f0e5abeb296b8978a8de571ecf034582b8f4c9c6799d9f1eed (image=quay.io/ceph/ceph:v19, name=unruffled_wilson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:34:04 compute-0 podman[6530]: 2025-10-09 09:34:04.694288742 +0000 UTC m=+0.015625061 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:04 compute-0 podman[6623]: 2025-10-09 09:34:04.934722786 +0000 UTC m=+0.034125625 container exec fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  9 09:34:05 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 09:34:05 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Oct  9 09:34:05 compute-0 podman[6623]: 2025-10-09 09:34:05.017337995 +0000 UTC m=+0.116740834 container exec_died fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:34:05 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:05 compute-0 systemd[1]: libpod-42779177869d92f0e5abeb296b8978a8de571ecf034582b8f4c9c6799d9f1eed.scope: Deactivated successfully.
Oct  9 09:34:05 compute-0 podman[6530]: 2025-10-09 09:34:05.033256438 +0000 UTC m=+0.354592738 container died 42779177869d92f0e5abeb296b8978a8de571ecf034582b8f4c9c6799d9f1eed (image=quay.io/ceph/ceph:v19, name=unruffled_wilson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  9 09:34:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e65ef1969b13fe17bf8e816e841240b6c9925ae0497ff0e50b4116ae4bb29b1-merged.mount: Deactivated successfully.
Oct  9 09:34:05 compute-0 podman[6530]: 2025-10-09 09:34:05.066689667 +0000 UTC m=+0.388025965 container remove 42779177869d92f0e5abeb296b8978a8de571ecf034582b8f4c9c6799d9f1eed (image=quay.io/ceph/ceph:v19, name=unruffled_wilson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:05 compute-0 systemd[1]: libpod-conmon-42779177869d92f0e5abeb296b8978a8de571ecf034582b8f4c9c6799d9f1eed.scope: Deactivated successfully.
Oct  9 09:34:05 compute-0 podman[6672]: 2025-10-09 09:34:05.110923519 +0000 UTC m=+0.030069527 container create 2eaaf0e8b0c51487c9aa152e6ce52815e4863a24f3b71d44f720d5564edc8d00 (image=quay.io/ceph/ceph:v19, name=hardcore_ishizaka, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  9 09:34:05 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:34:05 compute-0 systemd[1]: Started libpod-conmon-2eaaf0e8b0c51487c9aa152e6ce52815e4863a24f3b71d44f720d5564edc8d00.scope.
Oct  9 09:34:05 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:05 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:05 compute-0 ceph-mon[4497]: Saving service mgr spec with placement count:2
Oct  9 09:34:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be178b434bba6cab7a592baa359ef0c4c61eccb8103f546b49d812cf5011b350/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be178b434bba6cab7a592baa359ef0c4c61eccb8103f546b49d812cf5011b350/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be178b434bba6cab7a592baa359ef0c4c61eccb8103f546b49d812cf5011b350/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:05 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:05 compute-0 ceph-mon[4497]: Saving service crash spec with placement *
Oct  9 09:34:05 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:05 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:05 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3441111421' entity='client.admin' 
Oct  9 09:34:05 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:05 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:05 compute-0 podman[6672]: 2025-10-09 09:34:05.15078165 +0000 UTC m=+0.069927668 container init 2eaaf0e8b0c51487c9aa152e6ce52815e4863a24f3b71d44f720d5564edc8d00 (image=quay.io/ceph/ceph:v19, name=hardcore_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  9 09:34:05 compute-0 podman[6672]: 2025-10-09 09:34:05.155418643 +0000 UTC m=+0.074564650 container start 2eaaf0e8b0c51487c9aa152e6ce52815e4863a24f3b71d44f720d5564edc8d00 (image=quay.io/ceph/ceph:v19, name=hardcore_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Oct  9 09:34:05 compute-0 podman[6672]: 2025-10-09 09:34:05.156381659 +0000 UTC m=+0.075527666 container attach 2eaaf0e8b0c51487c9aa152e6ce52815e4863a24f3b71d44f720d5564edc8d00 (image=quay.io/ceph/ceph:v19, name=hardcore_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 09:34:05 compute-0 podman[6672]: 2025-10-09 09:34:05.100415776 +0000 UTC m=+0.019561804 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:05 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 6773 (sysctl)
Oct  9 09:34:05 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct  9 09:34:05 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct  9 09:34:05 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 09:34:05 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  9 09:34:05 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:05 compute-0 ceph-mgr[4772]: [cephadm INFO root] Added label _admin to host compute-0
Oct  9 09:34:05 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Oct  9 09:34:05 compute-0 hardcore_ishizaka[6688]: Added label _admin to host compute-0
Oct  9 09:34:05 compute-0 systemd[1]: libpod-2eaaf0e8b0c51487c9aa152e6ce52815e4863a24f3b71d44f720d5564edc8d00.scope: Deactivated successfully.
Oct  9 09:34:05 compute-0 podman[6672]: 2025-10-09 09:34:05.437227697 +0000 UTC m=+0.356373706 container died 2eaaf0e8b0c51487c9aa152e6ce52815e4863a24f3b71d44f720d5564edc8d00 (image=quay.io/ceph/ceph:v19, name=hardcore_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 09:34:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-be178b434bba6cab7a592baa359ef0c4c61eccb8103f546b49d812cf5011b350-merged.mount: Deactivated successfully.
Oct  9 09:34:05 compute-0 podman[6672]: 2025-10-09 09:34:05.458933334 +0000 UTC m=+0.378079342 container remove 2eaaf0e8b0c51487c9aa152e6ce52815e4863a24f3b71d44f720d5564edc8d00 (image=quay.io/ceph/ceph:v19, name=hardcore_ishizaka, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:05 compute-0 systemd[1]: libpod-conmon-2eaaf0e8b0c51487c9aa152e6ce52815e4863a24f3b71d44f720d5564edc8d00.scope: Deactivated successfully.
Oct  9 09:34:05 compute-0 podman[6790]: 2025-10-09 09:34:05.503542213 +0000 UTC m=+0.027224532 container create e4942e27c7ee5c3e4f59a16ae0b169de8cc1812bdfbe83d770ad37acfe10bf4c (image=quay.io/ceph/ceph:v19, name=pedantic_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  9 09:34:05 compute-0 systemd[1]: Started libpod-conmon-e4942e27c7ee5c3e4f59a16ae0b169de8cc1812bdfbe83d770ad37acfe10bf4c.scope.
Oct  9 09:34:05 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f15100a80b8d2a46dbbeaf7b87cb574f97d42c1d87abaa03e185a4b9f8dc4336/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f15100a80b8d2a46dbbeaf7b87cb574f97d42c1d87abaa03e185a4b9f8dc4336/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f15100a80b8d2a46dbbeaf7b87cb574f97d42c1d87abaa03e185a4b9f8dc4336/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:05 compute-0 podman[6790]: 2025-10-09 09:34:05.547086695 +0000 UTC m=+0.070769024 container init e4942e27c7ee5c3e4f59a16ae0b169de8cc1812bdfbe83d770ad37acfe10bf4c (image=quay.io/ceph/ceph:v19, name=pedantic_pike, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 09:34:05 compute-0 podman[6790]: 2025-10-09 09:34:05.551647754 +0000 UTC m=+0.075330064 container start e4942e27c7ee5c3e4f59a16ae0b169de8cc1812bdfbe83d770ad37acfe10bf4c (image=quay.io/ceph/ceph:v19, name=pedantic_pike, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:05 compute-0 podman[6790]: 2025-10-09 09:34:05.552633113 +0000 UTC m=+0.076315432 container attach e4942e27c7ee5c3e4f59a16ae0b169de8cc1812bdfbe83d770ad37acfe10bf4c (image=quay.io/ceph/ceph:v19, name=pedantic_pike, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:34:05 compute-0 podman[6790]: 2025-10-09 09:34:05.493010696 +0000 UTC m=+0.016693034 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:05 compute-0 ceph-mgr[4772]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  9 09:34:05 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Oct  9 09:34:05 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2534801079' entity='client.admin' 
Oct  9 09:34:05 compute-0 pedantic_pike[6807]: set mgr/dashboard/cluster/status
Oct  9 09:34:05 compute-0 systemd[1]: libpod-e4942e27c7ee5c3e4f59a16ae0b169de8cc1812bdfbe83d770ad37acfe10bf4c.scope: Deactivated successfully.
Oct  9 09:34:05 compute-0 podman[6790]: 2025-10-09 09:34:05.91023854 +0000 UTC m=+0.433920859 container died e4942e27c7ee5c3e4f59a16ae0b169de8cc1812bdfbe83d770ad37acfe10bf4c (image=quay.io/ceph/ceph:v19, name=pedantic_pike, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  9 09:34:05 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:34:05 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:05 compute-0 podman[6790]: 2025-10-09 09:34:05.935442622 +0000 UTC m=+0.459124941 container remove e4942e27c7ee5c3e4f59a16ae0b169de8cc1812bdfbe83d770ad37acfe10bf4c (image=quay.io/ceph/ceph:v19, name=pedantic_pike, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  9 09:34:05 compute-0 systemd[1]: libpod-conmon-e4942e27c7ee5c3e4f59a16ae0b169de8cc1812bdfbe83d770ad37acfe10bf4c.scope: Deactivated successfully.
Oct  9 09:34:05 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:34:06 compute-0 podman[7027]: 2025-10-09 09:34:06.290732714 +0000 UTC m=+0.028462687 container create f8408c2e1a7d45aa9214ccdbd990a1b9d08e56d0f12adc5cf41bb917493a7599 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:34:06 compute-0 systemd[1]: Started libpod-conmon-f8408c2e1a7d45aa9214ccdbd990a1b9d08e56d0f12adc5cf41bb917493a7599.scope.
Oct  9 09:34:06 compute-0 python3[7018]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:34:06 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:06 compute-0 podman[7027]: 2025-10-09 09:34:06.351853576 +0000 UTC m=+0.089583569 container init f8408c2e1a7d45aa9214ccdbd990a1b9d08e56d0f12adc5cf41bb917493a7599 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_kare, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:34:06 compute-0 podman[7027]: 2025-10-09 09:34:06.356720883 +0000 UTC m=+0.094450857 container start f8408c2e1a7d45aa9214ccdbd990a1b9d08e56d0f12adc5cf41bb917493a7599 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_kare, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 09:34:06 compute-0 podman[7027]: 2025-10-09 09:34:06.358625805 +0000 UTC m=+0.096355778 container attach f8408c2e1a7d45aa9214ccdbd990a1b9d08e56d0f12adc5cf41bb917493a7599 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:34:06 compute-0 priceless_kare[7040]: 167 167
Oct  9 09:34:06 compute-0 systemd[1]: libpod-f8408c2e1a7d45aa9214ccdbd990a1b9d08e56d0f12adc5cf41bb917493a7599.scope: Deactivated successfully.
Oct  9 09:34:06 compute-0 conmon[7040]: conmon f8408c2e1a7d45aa9214 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f8408c2e1a7d45aa9214ccdbd990a1b9d08e56d0f12adc5cf41bb917493a7599.scope/container/memory.events
Oct  9 09:34:06 compute-0 podman[7027]: 2025-10-09 09:34:06.360947703 +0000 UTC m=+0.098677676 container died f8408c2e1a7d45aa9214ccdbd990a1b9d08e56d0f12adc5cf41bb917493a7599 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_kare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  9 09:34:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-84b38571ed4b8998d0dd54f2e8980e17fa999f366853b01fd2bddecf6799273d-merged.mount: Deactivated successfully.
Oct  9 09:34:06 compute-0 podman[7027]: 2025-10-09 09:34:06.278274364 +0000 UTC m=+0.016004357 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:06 compute-0 podman[7027]: 2025-10-09 09:34:06.379895008 +0000 UTC m=+0.117624980 container remove f8408c2e1a7d45aa9214ccdbd990a1b9d08e56d0f12adc5cf41bb917493a7599 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  9 09:34:06 compute-0 podman[7043]: 2025-10-09 09:34:06.386983071 +0000 UTC m=+0.036679188 container create 68be8d5c560b7d9208c4e292b051b21131cf9107310eac2d50596f723bc9f7b2 (image=quay.io/ceph/ceph:v19, name=elastic_montalcini, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:34:06 compute-0 systemd[1]: libpod-conmon-f8408c2e1a7d45aa9214ccdbd990a1b9d08e56d0f12adc5cf41bb917493a7599.scope: Deactivated successfully.
Oct  9 09:34:06 compute-0 systemd[1]: Started libpod-conmon-68be8d5c560b7d9208c4e292b051b21131cf9107310eac2d50596f723bc9f7b2.scope.
Oct  9 09:34:06 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:06 compute-0 ceph-mon[4497]: Added label _admin to host compute-0
Oct  9 09:34:06 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/2534801079' entity='client.admin' 
Oct  9 09:34:06 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:06 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ecb6bdfa234ec1b617ce256969ac7c89a86631da7466c928b7a2c9b3e034c62/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ecb6bdfa234ec1b617ce256969ac7c89a86631da7466c928b7a2c9b3e034c62/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:06 compute-0 podman[7043]: 2025-10-09 09:34:06.435485091 +0000 UTC m=+0.085181219 container init 68be8d5c560b7d9208c4e292b051b21131cf9107310eac2d50596f723bc9f7b2 (image=quay.io/ceph/ceph:v19, name=elastic_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  9 09:34:06 compute-0 podman[7043]: 2025-10-09 09:34:06.440113669 +0000 UTC m=+0.089809775 container start 68be8d5c560b7d9208c4e292b051b21131cf9107310eac2d50596f723bc9f7b2 (image=quay.io/ceph/ceph:v19, name=elastic_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:34:06 compute-0 podman[7043]: 2025-10-09 09:34:06.441476288 +0000 UTC m=+0.091172395 container attach 68be8d5c560b7d9208c4e292b051b21131cf9107310eac2d50596f723bc9f7b2 (image=quay.io/ceph/ceph:v19, name=elastic_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct  9 09:34:06 compute-0 podman[7043]: 2025-10-09 09:34:06.368070874 +0000 UTC m=+0.017767001 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:06 compute-0 podman[7080]: 2025-10-09 09:34:06.503187293 +0000 UTC m=+0.027260742 container create a52e859b2357f61ba068d9cebb1440b85d4e3cf089c8768c9c7871761dd28873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:34:06 compute-0 systemd[1]: Started libpod-conmon-a52e859b2357f61ba068d9cebb1440b85d4e3cf089c8768c9c7871761dd28873.scope.
Oct  9 09:34:06 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e830786b5d5b05c4247b78e889e9f050a0b68e68df24a848795cf3bc2a0bffac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e830786b5d5b05c4247b78e889e9f050a0b68e68df24a848795cf3bc2a0bffac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e830786b5d5b05c4247b78e889e9f050a0b68e68df24a848795cf3bc2a0bffac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e830786b5d5b05c4247b78e889e9f050a0b68e68df24a848795cf3bc2a0bffac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:06 compute-0 podman[7080]: 2025-10-09 09:34:06.571861687 +0000 UTC m=+0.095935136 container init a52e859b2357f61ba068d9cebb1440b85d4e3cf089c8768c9c7871761dd28873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_torvalds, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:06 compute-0 podman[7080]: 2025-10-09 09:34:06.576219754 +0000 UTC m=+0.100293203 container start a52e859b2357f61ba068d9cebb1440b85d4e3cf089c8768c9c7871761dd28873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_torvalds, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  9 09:34:06 compute-0 podman[7080]: 2025-10-09 09:34:06.577437019 +0000 UTC m=+0.101510468 container attach a52e859b2357f61ba068d9cebb1440b85d4e3cf089c8768c9c7871761dd28873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_torvalds, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:06 compute-0 podman[7080]: 2025-10-09 09:34:06.491840809 +0000 UTC m=+0.015914278 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Oct  9 09:34:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3222810685' entity='client.admin' 
Oct  9 09:34:06 compute-0 systemd[1]: libpod-68be8d5c560b7d9208c4e292b051b21131cf9107310eac2d50596f723bc9f7b2.scope: Deactivated successfully.
Oct  9 09:34:06 compute-0 podman[7043]: 2025-10-09 09:34:06.726580636 +0000 UTC m=+0.376276763 container died 68be8d5c560b7d9208c4e292b051b21131cf9107310eac2d50596f723bc9f7b2 (image=quay.io/ceph/ceph:v19, name=elastic_montalcini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  9 09:34:06 compute-0 podman[7043]: 2025-10-09 09:34:06.746557101 +0000 UTC m=+0.396253208 container remove 68be8d5c560b7d9208c4e292b051b21131cf9107310eac2d50596f723bc9f7b2 (image=quay.io/ceph/ceph:v19, name=elastic_montalcini, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  9 09:34:06 compute-0 systemd[1]: libpod-conmon-68be8d5c560b7d9208c4e292b051b21131cf9107310eac2d50596f723bc9f7b2.scope: Deactivated successfully.
Oct  9 09:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ecb6bdfa234ec1b617ce256969ac7c89a86631da7466c928b7a2c9b3e034c62-merged.mount: Deactivated successfully.
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]: [
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:    {
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:        "available": false,
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:        "being_replaced": false,
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:        "ceph_device_lvm": false,
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:        "lsm_data": {},
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:        "lvs": [],
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:        "path": "/dev/sr0",
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:        "rejected_reasons": [
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "Insufficient space (<5GB)",
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "Has a FileSystem"
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:        ],
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:        "sys_api": {
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "actuators": null,
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "device_nodes": [
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:                "sr0"
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            ],
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "devname": "sr0",
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "human_readable_size": "474.00 KB",
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "id_bus": "ata",
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "model": "QEMU DVD-ROM",
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "nr_requests": "64",
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "parent": "/dev/sr0",
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "partitions": {},
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "path": "/dev/sr0",
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "removable": "1",
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "rev": "2.5+",
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "ro": "0",
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "rotational": "0",
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "sas_address": "",
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "sas_device_handle": "",
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "scheduler_mode": "mq-deadline",
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "sectors": 0,
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "sectorsize": "2048",
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "size": 485376.0,
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "support_discard": "2048",
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "type": "disk",
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:            "vendor": "QEMU"
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:        }
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]:    }
Oct  9 09:34:07 compute-0 eloquent_torvalds[7112]: ]
Oct  9 09:34:07 compute-0 systemd[1]: libpod-a52e859b2357f61ba068d9cebb1440b85d4e3cf089c8768c9c7871761dd28873.scope: Deactivated successfully.
Oct  9 09:34:07 compute-0 conmon[7112]: conmon a52e859b2357f61ba068 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a52e859b2357f61ba068d9cebb1440b85d4e3cf089c8768c9c7871761dd28873.scope/container/memory.events
Oct  9 09:34:07 compute-0 podman[7080]: 2025-10-09 09:34:07.137783671 +0000 UTC m=+0.661857139 container died a52e859b2357f61ba068d9cebb1440b85d4e3cf089c8768c9c7871761dd28873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_torvalds, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-e830786b5d5b05c4247b78e889e9f050a0b68e68df24a848795cf3bc2a0bffac-merged.mount: Deactivated successfully.
Oct  9 09:34:07 compute-0 podman[7080]: 2025-10-09 09:34:07.159808738 +0000 UTC m=+0.683882187 container remove a52e859b2357f61ba068d9cebb1440b85d4e3cf089c8768c9c7871761dd28873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_torvalds, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:34:07 compute-0 systemd[1]: libpod-conmon-a52e859b2357f61ba068d9cebb1440b85d4e3cf089c8768c9c7871761dd28873.scope: Deactivated successfully.
Oct  9 09:34:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:34:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:34:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:34:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:34:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct  9 09:34:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:34:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:34:07 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:34:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:34:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:34:07 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct  9 09:34:07 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct  9 09:34:07 compute-0 ansible-async_wrapper.py[8490]: Invoked with j292343572518 30 /home/zuul/.ansible/tmp/ansible-tmp-1760002447.180761-34084-90502129381613/AnsiballZ_command.py _
Oct  9 09:34:07 compute-0 ansible-async_wrapper.py[8568]: Starting module and watcher
Oct  9 09:34:07 compute-0 ansible-async_wrapper.py[8568]: Start watching 8569 (30)
Oct  9 09:34:07 compute-0 ansible-async_wrapper.py[8569]: Start module (8569)
Oct  9 09:34:07 compute-0 ansible-async_wrapper.py[8490]: Return async_wrapper task started.
Oct  9 09:34:07 compute-0 python3[8572]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:34:07 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:34:07 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:34:07 compute-0 podman[8636]: 2025-10-09 09:34:07.610202947 +0000 UTC m=+0.037124338 container create 6b0c0e8e38f3e249cd37f643025d9295fae02875157c73086e7744e61eee544f (image=quay.io/ceph/ceph:v19, name=serene_spence, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:34:07 compute-0 systemd[1]: Started libpod-conmon-6b0c0e8e38f3e249cd37f643025d9295fae02875157c73086e7744e61eee544f.scope.
Oct  9 09:34:07 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbfc1e327a7b5a299b542e48eb4f30d07dcccf2205e286bd30eb772b3e559d2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbfc1e327a7b5a299b542e48eb4f30d07dcccf2205e286bd30eb772b3e559d2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:07 compute-0 podman[8636]: 2025-10-09 09:34:07.668758733 +0000 UTC m=+0.095680134 container init 6b0c0e8e38f3e249cd37f643025d9295fae02875157c73086e7744e61eee544f (image=quay.io/ceph/ceph:v19, name=serene_spence, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  9 09:34:07 compute-0 podman[8636]: 2025-10-09 09:34:07.674095505 +0000 UTC m=+0.101016886 container start 6b0c0e8e38f3e249cd37f643025d9295fae02875157c73086e7744e61eee544f (image=quay.io/ceph/ceph:v19, name=serene_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  9 09:34:07 compute-0 podman[8636]: 2025-10-09 09:34:07.676325209 +0000 UTC m=+0.103246610 container attach 6b0c0e8e38f3e249cd37f643025d9295fae02875157c73086e7744e61eee544f (image=quay.io/ceph/ceph:v19, name=serene_spence, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:34:07 compute-0 podman[8636]: 2025-10-09 09:34:07.594836584 +0000 UTC m=+0.021757985 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:07 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3222810685' entity='client.admin' 
Oct  9 09:34:07 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:07 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:07 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:07 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:07 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:34:07 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:34:07 compute-0 ceph-mon[4497]: Updating compute-0:/etc/ceph/ceph.conf
Oct  9 09:34:07 compute-0 ceph-mgr[4772]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  9 09:34:07 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  9 09:34:07 compute-0 serene_spence[8684]: 
Oct  9 09:34:07 compute-0 serene_spence[8684]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  9 09:34:07 compute-0 podman[8636]: 2025-10-09 09:34:07.958985088 +0000 UTC m=+0.385906470 container died 6b0c0e8e38f3e249cd37f643025d9295fae02875157c73086e7744e61eee544f (image=quay.io/ceph/ceph:v19, name=serene_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  9 09:34:07 compute-0 systemd[1]: libpod-6b0c0e8e38f3e249cd37f643025d9295fae02875157c73086e7744e61eee544f.scope: Deactivated successfully.
Oct  9 09:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cbfc1e327a7b5a299b542e48eb4f30d07dcccf2205e286bd30eb772b3e559d2-merged.mount: Deactivated successfully.
Oct  9 09:34:07 compute-0 podman[8636]: 2025-10-09 09:34:07.982049316 +0000 UTC m=+0.408970697 container remove 6b0c0e8e38f3e249cd37f643025d9295fae02875157c73086e7744e61eee544f (image=quay.io/ceph/ceph:v19, name=serene_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  9 09:34:07 compute-0 systemd[1]: libpod-conmon-6b0c0e8e38f3e249cd37f643025d9295fae02875157c73086e7744e61eee544f.scope: Deactivated successfully.
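[annotation] The one-line JSON that serene_spence printed is the payload the calling playbook ultimately checks. Parsing it directly, with the line and assertions simply restating the logged values:

import json

line = '{"available": true, "backend": "cephadm", "paused": false, "workers": 10}'
status = json.loads(line)
assert status["available"] is True
assert status["backend"] == "cephadm" and not status["paused"]
print(f'{status["workers"]} orchestrator workers available')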
Oct  9 09:34:07 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:34:07 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:34:08 compute-0 ansible-async_wrapper.py[8569]: Module complete (8569)
Oct  9 09:34:08 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:34:08 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:34:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:34:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:34:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:34:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:08 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev 077f699d-f150-4c3a-8417-dca6004c7f5c (Updating crash deployment (+1 -> 1))
Oct  9 09:34:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct  9 09:34:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 09:34:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  9 09:34:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:34:08 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:34:08 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Oct  9 09:34:08 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
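[annotation] Each handle_command line above carries a JSON mon command payload. A hedged illustration, assuming python3-rados and admin credentials on the host (neither is shown in this log): the same "auth get-or-create" the mgr dispatched can be sent through the rados bindings' mon_command.

import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                      conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"})
cluster.connect()
# Payload copied from the audit line above.
cmd = {"prefix": "auth get-or-create",
       "entity": "client.crash.compute-0",
       "caps": ["mon", "profile crash", "mgr", "profile crash"]}
ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
cluster.shutdown()
print(ret, outbuf.decode())  # 0 plus the entity's keyring on success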
Oct  9 09:34:08 compute-0 ceph-mon[4497]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:34:08 compute-0 ceph-mon[4497]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:34:08 compute-0 ceph-mon[4497]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:34:08 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:08 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:08 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:08 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 09:34:08 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  9 09:34:08 compute-0 python3[9385]: ansible-ansible.legacy.async_status Invoked with jid=j292343572518.8490 mode=status _async_dir=/root/.ansible_async
Oct  9 09:34:08 compute-0 python3[9511]: ansible-ansible.legacy.async_status Invoked with jid=j292343572518.8490 mode=cleanup _async_dir=/root/.ansible_async
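[annotation] The async_wrapper/async_status pairing above is ansible's fire-and-poll pattern: the wrapper forks the module and writes a JSON job file under _async_dir, async_status polls it in mode=status, then deletes it in mode=cleanup. A rough sketch of the polling side, assuming the conventional "finished" field in the job file (an assumption; verify against your ansible version):

import json
import os
import time

def wait_for_async(jid, async_dir="/root/.ansible_async", timeout=30):
    """Poll the async job file until the module reports completion."""
    path = os.path.join(async_dir, jid)  # job file named after the jid
    deadline = time.time() + timeout
    while time.time() < deadline:
        with open(path) as fh:
            result = json.load(fh)
        if result.get("finished"):
            return result
        time.sleep(1)
    raise TimeoutError(f"job {jid} still running after {timeout}s")

# e.g. wait_for_async("j292343572518.8490") mirrors the status/cleanup
# invocations logged above.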
Oct  9 09:34:09 compute-0 podman[9545]: 2025-10-09 09:34:09.076700707 +0000 UTC m=+0.032680319 container create 5ad9abeb0fa6d84bfce6eca5cf9a23fbe97cec3a2a42434ef35eebd44472caf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  9 09:34:09 compute-0 systemd[1]: Started libpod-conmon-5ad9abeb0fa6d84bfce6eca5cf9a23fbe97cec3a2a42434ef35eebd44472caf0.scope.
Oct  9 09:34:09 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:09 compute-0 podman[9545]: 2025-10-09 09:34:09.134207686 +0000 UTC m=+0.090187307 container init 5ad9abeb0fa6d84bfce6eca5cf9a23fbe97cec3a2a42434ef35eebd44472caf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_vaughan, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:34:09 compute-0 podman[9545]: 2025-10-09 09:34:09.138560013 +0000 UTC m=+0.094539623 container start 5ad9abeb0fa6d84bfce6eca5cf9a23fbe97cec3a2a42434ef35eebd44472caf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_vaughan, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:34:09 compute-0 podman[9545]: 2025-10-09 09:34:09.139642263 +0000 UTC m=+0.095621874 container attach 5ad9abeb0fa6d84bfce6eca5cf9a23fbe97cec3a2a42434ef35eebd44472caf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_vaughan, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:34:09 compute-0 objective_vaughan[9558]: 167 167
Oct  9 09:34:09 compute-0 systemd[1]: libpod-5ad9abeb0fa6d84bfce6eca5cf9a23fbe97cec3a2a42434ef35eebd44472caf0.scope: Deactivated successfully.
Oct  9 09:34:09 compute-0 podman[9545]: 2025-10-09 09:34:09.14269051 +0000 UTC m=+0.098670122 container died 5ad9abeb0fa6d84bfce6eca5cf9a23fbe97cec3a2a42434ef35eebd44472caf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_vaughan, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True)
Oct  9 09:34:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7842cd1573f695af346ef99bcc1ca98342d6cdd3ba86906909b6eaf592d3db5-merged.mount: Deactivated successfully.
Oct  9 09:34:09 compute-0 podman[9545]: 2025-10-09 09:34:09.060913221 +0000 UTC m=+0.016892832 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:09 compute-0 podman[9545]: 2025-10-09 09:34:09.167997767 +0000 UTC m=+0.123977368 container remove 5ad9abeb0fa6d84bfce6eca5cf9a23fbe97cec3a2a42434ef35eebd44472caf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:34:09 compute-0 systemd[1]: libpod-conmon-5ad9abeb0fa6d84bfce6eca5cf9a23fbe97cec3a2a42434ef35eebd44472caf0.scope: Deactivated successfully.
Oct  9 09:34:09 compute-0 systemd[1]: Reloading.
Oct  9 09:34:09 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:34:09 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:34:09 compute-0 systemd[1]: Reloading.
Oct  9 09:34:09 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:34:09 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:34:09 compute-0 python3[9636]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:34:09 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:34:09 compute-0 ceph-mon[4497]: Deploying daemon crash.compute-0 on compute-0
Oct  9 09:34:09 compute-0 podman[9717]: 2025-10-09 09:34:09.762853875 +0000 UTC m=+0.027163166 container create 69e1dc7590382a4ef96eca1e5114444734c8d38a4f8d3e6761414de89660049a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  9 09:34:09 compute-0 ceph-mgr[4772]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  9 09:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10c8b64ffab7b96be7340e49df2a9da147888acff027771fd14156a24d62732b/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10c8b64ffab7b96be7340e49df2a9da147888acff027771fd14156a24d62732b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10c8b64ffab7b96be7340e49df2a9da147888acff027771fd14156a24d62732b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10c8b64ffab7b96be7340e49df2a9da147888acff027771fd14156a24d62732b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:09 compute-0 podman[9717]: 2025-10-09 09:34:09.805114377 +0000 UTC m=+0.069423668 container init 69e1dc7590382a4ef96eca1e5114444734c8d38a4f8d3e6761414de89660049a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:34:09 compute-0 podman[9717]: 2025-10-09 09:34:09.809694011 +0000 UTC m=+0.074003292 container start 69e1dc7590382a4ef96eca1e5114444734c8d38a4f8d3e6761414de89660049a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:34:09 compute-0 bash[9717]: 69e1dc7590382a4ef96eca1e5114444734c8d38a4f8d3e6761414de89660049a
Oct  9 09:34:09 compute-0 podman[9717]: 2025-10-09 09:34:09.752196881 +0000 UTC m=+0.016506184 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:09 compute-0 systemd[1]: Started Ceph crash.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:34:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-0[9729]: INFO:ceph-crash:pinging cluster to exercise our key
Oct  9 09:34:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:34:09 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:34:09 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct  9 09:34:09 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:09 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev 077f699d-f150-4c3a-8417-dca6004c7f5c (Updating crash deployment (+1 -> 1))
Oct  9 09:34:09 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event 077f699d-f150-4c3a-8417-dca6004c7f5c (Updating crash deployment (+1 -> 1)) in 1 seconds
Oct  9 09:34:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct  9 09:34:09 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct  9 09:34:09 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct  9 09:34:09 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-0[9729]: 2025-10-09T09:34:09.929+0000 7f97ed709640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  9 09:34:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-0[9729]: 2025-10-09T09:34:09.929+0000 7f97ed709640 -1 AuthRegistry(0x7f97e8069490) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  9 09:34:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-0[9729]: 2025-10-09T09:34:09.930+0000 7f97ed709640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  9 09:34:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-0[9729]: 2025-10-09T09:34:09.930+0000 7f97ed709640 -1 AuthRegistry(0x7f97ed707ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  9 09:34:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-0[9729]: 2025-10-09T09:34:09.930+0000 7f97e6ffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct  9 09:34:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-0[9729]: 2025-10-09T09:34:09.930+0000 7f97ed709640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct  9 09:34:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-0[9729]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct  9 09:34:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-0[9729]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
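[annotation] The failed ping is explained by the mounts: the crash unit only bind-mounts ceph.client.crash.compute-0.keyring (see the xfs remount lines above), so none of the default keyring paths exist inside the container, cephx is disabled, and the unauthenticated connection is rejected with errno 13 before the agent settles into watching /var/lib/ceph/crash. A diagnostic sketch, with the search order copied verbatim from the error lines:

import os

SEARCH_ORDER = [
    "/etc/ceph/ceph.client.admin.keyring",
    "/etc/ceph/ceph.keyring",
    "/etc/ceph/keyring",
    "/etc/ceph/keyring.bin",
]

missing = [p for p in SEARCH_ORDER if not os.path.exists(p)]
if len(missing) == len(SEARCH_ORDER):
    print("no default keyring present; expect 'disabling cephx' and a "
          "RADOS permission denied on the startup ping")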
Oct  9 09:34:09 compute-0 python3[9761]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:34:10 compute-0 podman[9828]: 2025-10-09 09:34:10.015235165 +0000 UTC m=+0.031352055 container create 7e7509f4eb7fae13c3dd7039f15439201e11764a5d2636320181b0e83574eb8d (image=quay.io/ceph/ceph:v19, name=determined_spence, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:10 compute-0 systemd[1]: Started libpod-conmon-7e7509f4eb7fae13c3dd7039f15439201e11764a5d2636320181b0e83574eb8d.scope.
Oct  9 09:34:10 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/784fcfb9b237ace805adea29aadb7a22064d61326e421fd076858b45795ad0d8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/784fcfb9b237ace805adea29aadb7a22064d61326e421fd076858b45795ad0d8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/784fcfb9b237ace805adea29aadb7a22064d61326e421fd076858b45795ad0d8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:10 compute-0 podman[9828]: 2025-10-09 09:34:10.08267471 +0000 UTC m=+0.098791611 container init 7e7509f4eb7fae13c3dd7039f15439201e11764a5d2636320181b0e83574eb8d (image=quay.io/ceph/ceph:v19, name=determined_spence, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:34:10 compute-0 podman[9828]: 2025-10-09 09:34:10.088368337 +0000 UTC m=+0.104485216 container start 7e7509f4eb7fae13c3dd7039f15439201e11764a5d2636320181b0e83574eb8d (image=quay.io/ceph/ceph:v19, name=determined_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:34:10 compute-0 podman[9828]: 2025-10-09 09:34:10.097181082 +0000 UTC m=+0.113297962 container attach 7e7509f4eb7fae13c3dd7039f15439201e11764a5d2636320181b0e83574eb8d (image=quay.io/ceph/ceph:v19, name=determined_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct  9 09:34:10 compute-0 podman[9828]: 2025-10-09 09:34:10.002922169 +0000 UTC m=+0.019039069 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:10 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  9 09:34:10 compute-0 determined_spence[9859]: 
Oct  9 09:34:10 compute-0 determined_spence[9859]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  9 09:34:10 compute-0 systemd[1]: libpod-7e7509f4eb7fae13c3dd7039f15439201e11764a5d2636320181b0e83574eb8d.scope: Deactivated successfully.
Oct  9 09:34:10 compute-0 podman[9828]: 2025-10-09 09:34:10.371418881 +0000 UTC m=+0.387535752 container died 7e7509f4eb7fae13c3dd7039f15439201e11764a5d2636320181b0e83574eb8d (image=quay.io/ceph/ceph:v19, name=determined_spence, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  9 09:34:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-784fcfb9b237ace805adea29aadb7a22064d61326e421fd076858b45795ad0d8-merged.mount: Deactivated successfully.
Oct  9 09:34:10 compute-0 podman[9828]: 2025-10-09 09:34:10.395613069 +0000 UTC m=+0.411729950 container remove 7e7509f4eb7fae13c3dd7039f15439201e11764a5d2636320181b0e83574eb8d (image=quay.io/ceph/ceph:v19, name=determined_spence, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  9 09:34:10 compute-0 podman[9939]: 2025-10-09 09:34:10.401568378 +0000 UTC m=+0.053126440 container exec fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:34:10 compute-0 systemd[1]: libpod-conmon-7e7509f4eb7fae13c3dd7039f15439201e11764a5d2636320181b0e83574eb8d.scope: Deactivated successfully.
Oct  9 09:34:10 compute-0 podman[9939]: 2025-10-09 09:34:10.483525317 +0000 UTC m=+0.135083357 container exec_died fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  9 09:34:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:34:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:34:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:34:10 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:34:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:34:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:34:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:34:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Oct  9 09:34:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Oct  9 09:34:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Oct  9 09:34:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Oct  9 09:34:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:10 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Oct  9 09:34:10 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Oct  9 09:34:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct  9 09:34:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 09:34:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct  9 09:34:10 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  9 09:34:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:34:10 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:34:10 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct  9 09:34:10 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct  9 09:34:10 compute-0 python3[10033]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
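[annotation] Assumed reconstruction of that task as plain Python, with the argv copied from the logged _raw_params (the assimilate_ceph.conf and ceph_spec.yaml mounts are carried over as logged, even though "config set" does not read them):

import subprocess

subprocess.run([
    "podman", "run", "--rm", "--net=host", "--ipc=host",
    "--volume", "/etc/ceph:/etc/ceph:z",
    "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
    "--volume", "/home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z",
    "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
    "--fsid", "286f8bf0-da72-5823-9a4e-ac4457d9e609",
    "-c", "/etc/ceph/ceph.conf",
    "-k", "/etc/ceph/ceph.client.admin.keyring",
    "config", "set", "global", "log_to_file", "true",
], check=True)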
Oct  9 09:34:10 compute-0 podman[10103]: 2025-10-09 09:34:10.791854855 +0000 UTC m=+0.027933739 container create 42808d7adc02499a1266a0c4806d87b08ba93641313dbc6ce80e002d89cc00d0 (image=quay.io/ceph/ceph:v19, name=epic_dirac, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Oct  9 09:34:10 compute-0 systemd[1]: Started libpod-conmon-42808d7adc02499a1266a0c4806d87b08ba93641313dbc6ce80e002d89cc00d0.scope.
Oct  9 09:34:10 compute-0 ceph-mgr[4772]: [progress INFO root] Writing back 1 completed events
Oct  9 09:34:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 09:34:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:10 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27e1a5e0f02f68ce5c8dacbfda7354479010c09d7fb7bd202476fbeb727e0802/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27e1a5e0f02f68ce5c8dacbfda7354479010c09d7fb7bd202476fbeb727e0802/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27e1a5e0f02f68ce5c8dacbfda7354479010c09d7fb7bd202476fbeb727e0802/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:10 compute-0 podman[10103]: 2025-10-09 09:34:10.8494506 +0000 UTC m=+0.085529484 container init 42808d7adc02499a1266a0c4806d87b08ba93641313dbc6ce80e002d89cc00d0 (image=quay.io/ceph/ceph:v19, name=epic_dirac, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:34:10 compute-0 podman[10103]: 2025-10-09 09:34:10.855203698 +0000 UTC m=+0.091282572 container start 42808d7adc02499a1266a0c4806d87b08ba93641313dbc6ce80e002d89cc00d0 (image=quay.io/ceph/ceph:v19, name=epic_dirac, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 09:34:10 compute-0 podman[10103]: 2025-10-09 09:34:10.856237567 +0000 UTC m=+0.092316441 container attach 42808d7adc02499a1266a0c4806d87b08ba93641313dbc6ce80e002d89cc00d0 (image=quay.io/ceph/ceph:v19, name=epic_dirac, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 09:34:10 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:10 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:10 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:10 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:10 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:10 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:10 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:10 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:10 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:34:10 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:10 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:10 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:10 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:10 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:10 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 09:34:10 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:10 compute-0 podman[10103]: 2025-10-09 09:34:10.780987445 +0000 UTC m=+0.017066340 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
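[annotation] For readability, the mon cache autotuner numbers above converted to MiB (pure arithmetic, nothing assumed):

MiB = 2 ** 20
for name, val in [("cache_size", 1020054731), ("inc_alloc", 348127232),
                  ("full_alloc", 348127232), ("kv_alloc", 322961408)]:
    print(f"{name}: {val / MiB:.0f} MiB")
# cache_size ~973 MiB; inc_alloc/full_alloc exactly 332 MiB; kv_alloc exactly 308 MiB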
Oct  9 09:34:11 compute-0 podman[10157]: 2025-10-09 09:34:11.015199647 +0000 UTC m=+0.027821859 container create 3d298772b29382a22fa141e08279ed0fe41f1dad316debaed608881f659c2fd1 (image=quay.io/ceph/ceph:v19, name=recursing_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  9 09:34:11 compute-0 systemd[1]: Started libpod-conmon-3d298772b29382a22fa141e08279ed0fe41f1dad316debaed608881f659c2fd1.scope.
Oct  9 09:34:11 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:11 compute-0 podman[10157]: 2025-10-09 09:34:11.059354431 +0000 UTC m=+0.071976652 container init 3d298772b29382a22fa141e08279ed0fe41f1dad316debaed608881f659c2fd1 (image=quay.io/ceph/ceph:v19, name=recursing_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:34:11 compute-0 podman[10157]: 2025-10-09 09:34:11.064018043 +0000 UTC m=+0.076640245 container start 3d298772b29382a22fa141e08279ed0fe41f1dad316debaed608881f659c2fd1 (image=quay.io/ceph/ceph:v19, name=recursing_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:34:11 compute-0 podman[10157]: 2025-10-09 09:34:11.065223697 +0000 UTC m=+0.077845898 container attach 3d298772b29382a22fa141e08279ed0fe41f1dad316debaed608881f659c2fd1 (image=quay.io/ceph/ceph:v19, name=recursing_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  9 09:34:11 compute-0 recursing_herschel[10171]: 167 167
Oct  9 09:34:11 compute-0 systemd[1]: libpod-3d298772b29382a22fa141e08279ed0fe41f1dad316debaed608881f659c2fd1.scope: Deactivated successfully.
Oct  9 09:34:11 compute-0 podman[10176]: 2025-10-09 09:34:11.098055189 +0000 UTC m=+0.018171302 container died 3d298772b29382a22fa141e08279ed0fe41f1dad316debaed608881f659c2fd1 (image=quay.io/ceph/ceph:v19, name=recursing_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  9 09:34:11 compute-0 podman[10157]: 2025-10-09 09:34:11.003249154 +0000 UTC m=+0.015871377 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-eaf2e6866b09b60405d49e5708ef008f23ecd6d887d3d69a97e77663d1bcc50a-merged.mount: Deactivated successfully.
Oct  9 09:34:11 compute-0 podman[10176]: 2025-10-09 09:34:11.117694359 +0000 UTC m=+0.037810472 container remove 3d298772b29382a22fa141e08279ed0fe41f1dad316debaed608881f659c2fd1 (image=quay.io/ceph/ceph:v19, name=recursing_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  9 09:34:11 compute-0 systemd[1]: libpod-conmon-3d298772b29382a22fa141e08279ed0fe41f1dad316debaed608881f659c2fd1.scope: Deactivated successfully.
Oct  9 09:34:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:34:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:34:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:11 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.lwqgfy (unknown last config time)...
Oct  9 09:34:11 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.lwqgfy (unknown last config time)...
Oct  9 09:34:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.lwqgfy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct  9 09:34:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.lwqgfy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 09:34:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  9 09:34:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  9 09:34:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Oct  9 09:34:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:34:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:34:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3512036144' entity='client.admin' 
Oct  9 09:34:11 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.lwqgfy on compute-0
Oct  9 09:34:11 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.lwqgfy on compute-0
Oct  9 09:34:11 compute-0 systemd[1]: libpod-42808d7adc02499a1266a0c4806d87b08ba93641313dbc6ce80e002d89cc00d0.scope: Deactivated successfully.
Oct  9 09:34:11 compute-0 podman[10103]: 2025-10-09 09:34:11.183034096 +0000 UTC m=+0.419112960 container died 42808d7adc02499a1266a0c4806d87b08ba93641313dbc6ce80e002d89cc00d0 (image=quay.io/ceph/ceph:v19, name=epic_dirac, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-27e1a5e0f02f68ce5c8dacbfda7354479010c09d7fb7bd202476fbeb727e0802-merged.mount: Deactivated successfully.
Oct  9 09:34:11 compute-0 podman[10103]: 2025-10-09 09:34:11.202772462 +0000 UTC m=+0.438851336 container remove 42808d7adc02499a1266a0c4806d87b08ba93641313dbc6ce80e002d89cc00d0 (image=quay.io/ceph/ceph:v19, name=epic_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  9 09:34:11 compute-0 systemd[1]: libpod-conmon-42808d7adc02499a1266a0c4806d87b08ba93641313dbc6ce80e002d89cc00d0.scope: Deactivated successfully.
Oct  9 09:34:11 compute-0 python3[10273]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:34:11 compute-0 podman[10284]: 2025-10-09 09:34:11.471152486 +0000 UTC m=+0.028381955 container create b2e838f46dedeeca4edb918cfb29f64f9db4977cd63d6952279af454c164db3c (image=quay.io/ceph/ceph:v19, name=thirsty_galileo, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  9 09:34:11 compute-0 systemd[1]: Started libpod-conmon-b2e838f46dedeeca4edb918cfb29f64f9db4977cd63d6952279af454c164db3c.scope.
Oct  9 09:34:11 compute-0 podman[10297]: 2025-10-09 09:34:11.495578211 +0000 UTC m=+0.028702759 container create e91dbafeea17f3b4a5eff476311ca72f43c6fa9da3aff8915f098e1a13c4d91c (image=quay.io/ceph/ceph:v19, name=awesome_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:11 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:11 compute-0 systemd[1]: Started libpod-conmon-e91dbafeea17f3b4a5eff476311ca72f43c6fa9da3aff8915f098e1a13c4d91c.scope.
Oct  9 09:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d5e729636dee3f7d2059d38e79b6d1883bc0968873f15d3ebee2eab22230783/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d5e729636dee3f7d2059d38e79b6d1883bc0968873f15d3ebee2eab22230783/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d5e729636dee3f7d2059d38e79b6d1883bc0968873f15d3ebee2eab22230783/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:11 compute-0 podman[10284]: 2025-10-09 09:34:11.534102859 +0000 UTC m=+0.091332346 container init b2e838f46dedeeca4edb918cfb29f64f9db4977cd63d6952279af454c164db3c (image=quay.io/ceph/ceph:v19, name=thirsty_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:11 compute-0 podman[10284]: 2025-10-09 09:34:11.538496031 +0000 UTC m=+0.095725489 container start b2e838f46dedeeca4edb918cfb29f64f9db4977cd63d6952279af454c164db3c (image=quay.io/ceph/ceph:v19, name=thirsty_galileo, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  9 09:34:11 compute-0 podman[10284]: 2025-10-09 09:34:11.539435774 +0000 UTC m=+0.096665241 container attach b2e838f46dedeeca4edb918cfb29f64f9db4977cd63d6952279af454c164db3c (image=quay.io/ceph/ceph:v19, name=thirsty_galileo, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct  9 09:34:11 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:11 compute-0 podman[10297]: 2025-10-09 09:34:11.549868625 +0000 UTC m=+0.082993163 container init e91dbafeea17f3b4a5eff476311ca72f43c6fa9da3aff8915f098e1a13c4d91c (image=quay.io/ceph/ceph:v19, name=awesome_heyrovsky, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:34:11 compute-0 podman[10297]: 2025-10-09 09:34:11.5538015 +0000 UTC m=+0.086926037 container start e91dbafeea17f3b4a5eff476311ca72f43c6fa9da3aff8915f098e1a13c4d91c (image=quay.io/ceph/ceph:v19, name=awesome_heyrovsky, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 09:34:11 compute-0 podman[10297]: 2025-10-09 09:34:11.554839827 +0000 UTC m=+0.087964375 container attach e91dbafeea17f3b4a5eff476311ca72f43c6fa9da3aff8915f098e1a13c4d91c (image=quay.io/ceph/ceph:v19, name=awesome_heyrovsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:34:11 compute-0 awesome_heyrovsky[10316]: 167 167
Oct  9 09:34:11 compute-0 podman[10284]: 2025-10-09 09:34:11.458842276 +0000 UTC m=+0.016071754 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:11 compute-0 systemd[1]: libpod-e91dbafeea17f3b4a5eff476311ca72f43c6fa9da3aff8915f098e1a13c4d91c.scope: Deactivated successfully.
Oct  9 09:34:11 compute-0 podman[10297]: 2025-10-09 09:34:11.556746673 +0000 UTC m=+0.089871211 container died e91dbafeea17f3b4a5eff476311ca72f43c6fa9da3aff8915f098e1a13c4d91c (image=quay.io/ceph/ceph:v19, name=awesome_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  9 09:34:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5ad89fdbd497c36cbf797bf38b7b9be642772a9faefbd593b10afa6334265c8-merged.mount: Deactivated successfully.
Oct  9 09:34:11 compute-0 podman[10297]: 2025-10-09 09:34:11.58087699 +0000 UTC m=+0.114001528 container remove e91dbafeea17f3b4a5eff476311ca72f43c6fa9da3aff8915f098e1a13c4d91c (image=quay.io/ceph/ceph:v19, name=awesome_heyrovsky, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:34:11 compute-0 podman[10297]: 2025-10-09 09:34:11.484740827 +0000 UTC m=+0.017865375 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:11 compute-0 systemd[1]: libpod-conmon-e91dbafeea17f3b4a5eff476311ca72f43c6fa9da3aff8915f098e1a13c4d91c.scope: Deactivated successfully.
Oct  9 09:34:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:34:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:34:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:34:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:34:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:34:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:34:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:34:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:11 compute-0 ceph-mgr[4772]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  9 09:34:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Oct  9 09:34:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3154809530' entity='client.admin' 
Oct  9 09:34:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:34:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:34:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:34:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:34:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:34:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:11 compute-0 systemd[1]: libpod-b2e838f46dedeeca4edb918cfb29f64f9db4977cd63d6952279af454c164db3c.scope: Deactivated successfully.
Oct  9 09:34:11 compute-0 podman[10284]: 2025-10-09 09:34:11.829004029 +0000 UTC m=+0.386233496 container died b2e838f46dedeeca4edb918cfb29f64f9db4977cd63d6952279af454c164db3c (image=quay.io/ceph/ceph:v19, name=thirsty_galileo, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d5e729636dee3f7d2059d38e79b6d1883bc0968873f15d3ebee2eab22230783-merged.mount: Deactivated successfully.
Oct  9 09:34:11 compute-0 podman[10284]: 2025-10-09 09:34:11.852345318 +0000 UTC m=+0.409574785 container remove b2e838f46dedeeca4edb918cfb29f64f9db4977cd63d6952279af454c164db3c (image=quay.io/ceph/ceph:v19, name=thirsty_galileo, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct  9 09:34:11 compute-0 systemd[1]: libpod-conmon-b2e838f46dedeeca4edb918cfb29f64f9db4977cd63d6952279af454c164db3c.scope: Deactivated successfully.
Oct  9 09:34:12 compute-0 python3[10437]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:34:12 compute-0 podman[10438]: 2025-10-09 09:34:12.156067119 +0000 UTC m=+0.029216028 container create 03730b5340b52f74dab21c8db7278836479f4af808c7fc2ecddd2f6937052279 (image=quay.io/ceph/ceph:v19, name=hardcore_wilson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:12 compute-0 ceph-mon[4497]: Reconfiguring mon.compute-0 (unknown last config time)...
Oct  9 09:34:12 compute-0 ceph-mon[4497]: Reconfiguring daemon mon.compute-0 on compute-0
Oct  9 09:34:12 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:12 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:12 compute-0 ceph-mon[4497]: Reconfiguring mgr.compute-0.lwqgfy (unknown last config time)...
Oct  9 09:34:12 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.lwqgfy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 09:34:12 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3512036144' entity='client.admin' 
Oct  9 09:34:12 compute-0 ceph-mon[4497]: Reconfiguring daemon mgr.compute-0.lwqgfy on compute-0
Oct  9 09:34:12 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:12 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:12 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:34:12 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:12 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3154809530' entity='client.admin' 
Oct  9 09:34:12 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:34:12 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:12 compute-0 systemd[1]: Started libpod-conmon-03730b5340b52f74dab21c8db7278836479f4af808c7fc2ecddd2f6937052279.scope.
Oct  9 09:34:12 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97e9d5b993adb8fec1f2b45285a977fd4428fda3a20f2251a8fff6111369409/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97e9d5b993adb8fec1f2b45285a977fd4428fda3a20f2251a8fff6111369409/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97e9d5b993adb8fec1f2b45285a977fd4428fda3a20f2251a8fff6111369409/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:12 compute-0 podman[10438]: 2025-10-09 09:34:12.212285397 +0000 UTC m=+0.085434306 container init 03730b5340b52f74dab21c8db7278836479f4af808c7fc2ecddd2f6937052279 (image=quay.io/ceph/ceph:v19, name=hardcore_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:12 compute-0 podman[10438]: 2025-10-09 09:34:12.217054359 +0000 UTC m=+0.090203268 container start 03730b5340b52f74dab21c8db7278836479f4af808c7fc2ecddd2f6937052279 (image=quay.io/ceph/ceph:v19, name=hardcore_wilson, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:34:12 compute-0 podman[10438]: 2025-10-09 09:34:12.218457054 +0000 UTC m=+0.091605963 container attach 03730b5340b52f74dab21c8db7278836479f4af808c7fc2ecddd2f6937052279 (image=quay.io/ceph/ceph:v19, name=hardcore_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:12 compute-0 podman[10438]: 2025-10-09 09:34:12.143818535 +0000 UTC m=+0.016967444 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:12 compute-0 ansible-async_wrapper.py[8568]: Done in kid B.
Oct  9 09:34:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Oct  9 09:34:12 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/474857647' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct  9 09:34:13 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Oct  9 09:34:13 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  9 09:34:13 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/474857647' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct  9 09:34:13 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/474857647' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct  9 09:34:13 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Oct  9 09:34:13 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Oct  9 09:34:13 compute-0 hardcore_wilson[10450]: set require_min_compat_client to mimic
Oct  9 09:34:13 compute-0 systemd[1]: libpod-03730b5340b52f74dab21c8db7278836479f4af808c7fc2ecddd2f6937052279.scope: Deactivated successfully.
Oct  9 09:34:13 compute-0 conmon[10450]: conmon 03730b5340b52f74dab2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-03730b5340b52f74dab21c8db7278836479f4af808c7fc2ecddd2f6937052279.scope/container/memory.events
Oct  9 09:34:13 compute-0 podman[10438]: 2025-10-09 09:34:13.177649752 +0000 UTC m=+1.050798660 container died 03730b5340b52f74dab21c8db7278836479f4af808c7fc2ecddd2f6937052279 (image=quay.io/ceph/ceph:v19, name=hardcore_wilson, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325)
Oct  9 09:34:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c97e9d5b993adb8fec1f2b45285a977fd4428fda3a20f2251a8fff6111369409-merged.mount: Deactivated successfully.
Oct  9 09:34:13 compute-0 podman[10438]: 2025-10-09 09:34:13.194537272 +0000 UTC m=+1.067686181 container remove 03730b5340b52f74dab21c8db7278836479f4af808c7fc2ecddd2f6937052279 (image=quay.io/ceph/ceph:v19, name=hardcore_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:34:13 compute-0 systemd[1]: libpod-conmon-03730b5340b52f74dab21c8db7278836479f4af808c7fc2ecddd2f6937052279.scope: Deactivated successfully.
Oct  9 09:34:13 compute-0 python3[10511]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:34:13 compute-0 podman[10512]: 2025-10-09 09:34:13.704453979 +0000 UTC m=+0.027637370 container create 5747c75a6f653771ff76e37da13ebf9116983e1f24c6c15e03d3fc1938853d3a (image=quay.io/ceph/ceph:v19, name=wizardly_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 09:34:13 compute-0 systemd[1]: Started libpod-conmon-5747c75a6f653771ff76e37da13ebf9116983e1f24c6c15e03d3fc1938853d3a.scope.
Oct  9 09:34:13 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d20ed7935c52e21815118a59e7d2eb09de655d7ea71ecaf4fa6ccc1d7ff985d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d20ed7935c52e21815118a59e7d2eb09de655d7ea71ecaf4fa6ccc1d7ff985d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d20ed7935c52e21815118a59e7d2eb09de655d7ea71ecaf4fa6ccc1d7ff985d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:13 compute-0 podman[10512]: 2025-10-09 09:34:13.749810559 +0000 UTC m=+0.072993969 container init 5747c75a6f653771ff76e37da13ebf9116983e1f24c6c15e03d3fc1938853d3a (image=quay.io/ceph/ceph:v19, name=wizardly_almeida, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 09:34:13 compute-0 podman[10512]: 2025-10-09 09:34:13.753966534 +0000 UTC m=+0.077149925 container start 5747c75a6f653771ff76e37da13ebf9116983e1f24c6c15e03d3fc1938853d3a (image=quay.io/ceph/ceph:v19, name=wizardly_almeida, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct  9 09:34:13 compute-0 podman[10512]: 2025-10-09 09:34:13.755218535 +0000 UTC m=+0.078401946 container attach 5747c75a6f653771ff76e37da13ebf9116983e1f24c6c15e03d3fc1938853d3a (image=quay.io/ceph/ceph:v19, name=wizardly_almeida, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:34:13 compute-0 podman[10512]: 2025-10-09 09:34:13.692781703 +0000 UTC m=+0.015965112 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:13 compute-0 ceph-mgr[4772]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  9 09:34:14 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 09:34:14 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/474857647' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct  9 09:34:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  9 09:34:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  9 09:34:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  9 09:34:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  9 09:34:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:14 compute-0 ceph-mgr[4772]: [cephadm INFO root] Added host compute-0
Oct  9 09:34:14 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Added host compute-0
Oct  9 09:34:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:34:14 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:34:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:34:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:34:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:34:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:15 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Oct  9 09:34:15 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Oct  9 09:34:15 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:15 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:15 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:15 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:15 compute-0 ceph-mon[4497]: Added host compute-0
Oct  9 09:34:15 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:34:15 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:15 compute-0 ceph-mgr[4772]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Oct  9 09:34:15 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 09:34:15 compute-0 ceph-mon[4497]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct  9 09:34:15 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:34:16 compute-0 ceph-mon[4497]: Deploying cephadm binary to compute-1
Oct  9 09:34:16 compute-0 ceph-mon[4497]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct  9 09:34:17 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 09:34:18 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  9 09:34:18 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:18 compute-0 ceph-mgr[4772]: [cephadm INFO root] Added host compute-1
Oct  9 09:34:18 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Added host compute-1
Oct  9 09:34:18 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:34:18 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:18 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:34:18 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:19 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:19 compute-0 ceph-mon[4497]: Added host compute-1
Oct  9 09:34:19 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:19 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:19 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Oct  9 09:34:19 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Oct  9 09:34:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:34:19 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:19 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 09:34:20 compute-0 ceph-mon[4497]: Deploying cephadm binary to compute-2
Oct  9 09:34:20 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:20 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:34:21 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 09:34:21 compute-0 chronyd[804]: Selected source 69.176.84.79 (pool.ntp.org)
Oct  9 09:34:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Oct  9 09:34:22 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:22 compute-0 ceph-mgr[4772]: [cephadm INFO root] Added host compute-2
Oct  9 09:34:22 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Added host compute-2
Oct  9 09:34:22 compute-0 ceph-mgr[4772]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Oct  9 09:34:22 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Oct  9 09:34:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct  9 09:34:22 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:22 compute-0 ceph-mgr[4772]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct  9 09:34:22 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct  9 09:34:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct  9 09:34:22 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:22 compute-0 ceph-mgr[4772]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Oct  9 09:34:22 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Oct  9 09:34:22 compute-0 ceph-mgr[4772]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Oct  9 09:34:22 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Oct  9 09:34:22 compute-0 ceph-mgr[4772]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct  9 09:34:22 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct  9 09:34:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Oct  9 09:34:22 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:22 compute-0 wizardly_almeida[10524]: Added host 'compute-0' with addr '192.168.122.100'
Oct  9 09:34:22 compute-0 wizardly_almeida[10524]: Added host 'compute-1' with addr '192.168.122.101'
Oct  9 09:34:22 compute-0 wizardly_almeida[10524]: Added host 'compute-2' with addr '192.168.122.102'
Oct  9 09:34:22 compute-0 wizardly_almeida[10524]: Scheduled mon update...
Oct  9 09:34:22 compute-0 wizardly_almeida[10524]: Scheduled mgr update...
Oct  9 09:34:22 compute-0 wizardly_almeida[10524]: Scheduled osd.default_drive_group update...
Oct  9 09:34:22 compute-0 systemd[1]: libpod-5747c75a6f653771ff76e37da13ebf9116983e1f24c6c15e03d3fc1938853d3a.scope: Deactivated successfully.
Oct  9 09:34:22 compute-0 podman[10512]: 2025-10-09 09:34:22.357174751 +0000 UTC m=+8.680358142 container died 5747c75a6f653771ff76e37da13ebf9116983e1f24c6c15e03d3fc1938853d3a (image=quay.io/ceph/ceph:v19, name=wizardly_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 09:34:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d20ed7935c52e21815118a59e7d2eb09de655d7ea71ecaf4fa6ccc1d7ff985d-merged.mount: Deactivated successfully.
Oct  9 09:34:22 compute-0 podman[10512]: 2025-10-09 09:34:22.376697075 +0000 UTC m=+8.699880468 container remove 5747c75a6f653771ff76e37da13ebf9116983e1f24c6c15e03d3fc1938853d3a (image=quay.io/ceph/ceph:v19, name=wizardly_almeida, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:22 compute-0 systemd[1]: libpod-conmon-5747c75a6f653771ff76e37da13ebf9116983e1f24c6c15e03d3fc1938853d3a.scope: Deactivated successfully.
Oct  9 09:34:22 compute-0 python3[10677]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:34:22 compute-0 podman[10679]: 2025-10-09 09:34:22.73859858 +0000 UTC m=+0.029322577 container create 93186f0a1c6c7e1b40ab6ea4c996ea34e6def956d0b499d7bdd17b71aae3326d (image=quay.io/ceph/ceph:v19, name=focused_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  9 09:34:22 compute-0 systemd[1]: Started libpod-conmon-93186f0a1c6c7e1b40ab6ea4c996ea34e6def956d0b499d7bdd17b71aae3326d.scope.
Oct  9 09:34:22 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb932f57b64fc61438dfad8f06d1fbac7898cfb4b108c12985899a7d90a3c83f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb932f57b64fc61438dfad8f06d1fbac7898cfb4b108c12985899a7d90a3c83f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb932f57b64fc61438dfad8f06d1fbac7898cfb4b108c12985899a7d90a3c83f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:22 compute-0 podman[10679]: 2025-10-09 09:34:22.798268008 +0000 UTC m=+0.088992015 container init 93186f0a1c6c7e1b40ab6ea4c996ea34e6def956d0b499d7bdd17b71aae3326d (image=quay.io/ceph/ceph:v19, name=focused_ganguly, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:34:22 compute-0 podman[10679]: 2025-10-09 09:34:22.80271763 +0000 UTC m=+0.093441616 container start 93186f0a1c6c7e1b40ab6ea4c996ea34e6def956d0b499d7bdd17b71aae3326d (image=quay.io/ceph/ceph:v19, name=focused_ganguly, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:34:22 compute-0 podman[10679]: 2025-10-09 09:34:22.803757892 +0000 UTC m=+0.094481877 container attach 93186f0a1c6c7e1b40ab6ea4c996ea34e6def956d0b499d7bdd17b71aae3326d (image=quay.io/ceph/ceph:v19, name=focused_ganguly, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:34:22 compute-0 podman[10679]: 2025-10-09 09:34:22.726588169 +0000 UTC m=+0.017312177 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:23 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct  9 09:34:23 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2870211017' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  9 09:34:23 compute-0 focused_ganguly[10692]: 
Oct  9 09:34:23 compute-0 focused_ganguly[10692]: {"fsid":"286f8bf0-da72-5823-9a4e-ac4457d9e609","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":42,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-10-09T09:33:39:705322+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-09T09:33:39.706205+0000","services":{}},"progress_events":{}}
Oct  9 09:34:23 compute-0 systemd[1]: libpod-93186f0a1c6c7e1b40ab6ea4c996ea34e6def956d0b499d7bdd17b71aae3326d.scope: Deactivated successfully.
Oct  9 09:34:23 compute-0 conmon[10692]: conmon 93186f0a1c6c7e1b40ab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-93186f0a1c6c7e1b40ab6ea4c996ea34e6def956d0b499d7bdd17b71aae3326d.scope/container/memory.events
Oct  9 09:34:23 compute-0 podman[10679]: 2025-10-09 09:34:23.15685293 +0000 UTC m=+0.447576916 container died 93186f0a1c6c7e1b40ab6ea4c996ea34e6def956d0b499d7bdd17b71aae3326d (image=quay.io/ceph/ceph:v19, name=focused_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 09:34:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb932f57b64fc61438dfad8f06d1fbac7898cfb4b108c12985899a7d90a3c83f-merged.mount: Deactivated successfully.
Oct  9 09:34:23 compute-0 podman[10679]: 2025-10-09 09:34:23.175685246 +0000 UTC m=+0.466409221 container remove 93186f0a1c6c7e1b40ab6ea4c996ea34e6def956d0b499d7bdd17b71aae3326d (image=quay.io/ceph/ceph:v19, name=focused_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 09:34:23 compute-0 systemd[1]: libpod-conmon-93186f0a1c6c7e1b40ab6ea4c996ea34e6def956d0b499d7bdd17b71aae3326d.scope: Deactivated successfully.
Oct  9 09:34:23 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:23 compute-0 ceph-mon[4497]: Added host compute-2
Oct  9 09:34:23 compute-0 ceph-mon[4497]: Saving service mon spec with placement compute-0;compute-1;compute-2
Oct  9 09:34:23 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:23 compute-0 ceph-mon[4497]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Oct  9 09:34:23 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:23 compute-0 ceph-mon[4497]: Marking host: compute-0 for OSDSpec preview refresh.
Oct  9 09:34:23 compute-0 ceph-mon[4497]: Marking host: compute-1 for OSDSpec preview refresh.
Oct  9 09:34:23 compute-0 ceph-mon[4497]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Oct  9 09:34:23 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:23 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 09:34:25 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 09:34:25 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:34:25 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:34:25 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:34:25 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:34:25 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:34:25 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:34:25 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:34:27 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 09:34:29 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 09:34:30 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:34:31 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 09:34:33 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 09:34:35 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 09:34:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:34:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:34:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:34:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:34:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:34:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct  9 09:34:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 09:34:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:34:37 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:34:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:34:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:34:37 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct  9 09:34:37 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct  9 09:34:37 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:37 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:37 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:37 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:37 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 09:34:37 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:34:37 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:34:37 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:34:37 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 09:34:37 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:34:37 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:34:38 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:34:38 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:34:38 compute-0 ceph-mon[4497]: Updating compute-1:/etc/ceph/ceph.conf
Oct  9 09:34:38 compute-0 ceph-mon[4497]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:34:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:34:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:34:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:34:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:38 compute-0 ceph-mgr[4772]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct  9 09:34:38 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct  9 09:34:38 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 09:34:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:34:38.579+0000 7f4e3bf87640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Oct  9 09:34:38 compute-0 ceph-mgr[4772]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct  9 09:34:38 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct  9 09:34:38 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 09:34:38 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev 86d17afb-9f9f-4e99-9155-379d3071f0d9 (Updating crash deployment (+1 -> 2))
Oct  9 09:34:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: service_name: mon
Oct  9 09:34:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: placement:
Oct  9 09:34:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  hosts:
Oct  9 09:34:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  - compute-0
Oct  9 09:34:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  - compute-1
Oct  9 09:34:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  - compute-2
Oct  9 09:34:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct  9 09:34:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:34:38.580+0000 7f4e3bf87640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Oct  9 09:34:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: service_name: mgr
Oct  9 09:34:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: placement:
Oct  9 09:34:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  hosts:
Oct  9 09:34:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  - compute-0
Oct  9 09:34:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  - compute-1
Oct  9 09:34:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  - compute-2
Oct  9 09:34:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct  9 09:34:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct  9 09:34:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 09:34:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  9 09:34:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:34:38 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:34:38 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Oct  9 09:34:38 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Oct  9 09:34:39 compute-0 ceph-mon[4497]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:34:39 compute-0 ceph-mon[4497]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:34:39 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:39 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:39 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:39 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 09:34:39 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  9 09:34:39 compute-0 ceph-mon[4497]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Oct  9 09:34:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:34:40 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:34:40 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct  9 09:34:40 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:40 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev 86d17afb-9f9f-4e99-9155-379d3071f0d9 (Updating crash deployment (+1 -> 2))
Oct  9 09:34:40 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event 86d17afb-9f9f-4e99-9155-379d3071f0d9 (Updating crash deployment (+1 -> 2)) in 2 seconds
Oct  9 09:34:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct  9 09:34:40 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:34:40 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:34:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:34:40 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:34:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:34:40 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:34:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:34:40 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:34:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:34:40 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:34:40 compute-0 ceph-mon[4497]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct  9 09:34:40 compute-0 ceph-mon[4497]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct  9 09:34:40 compute-0 ceph-mon[4497]: Deploying daemon crash.compute-1 on compute-1
Oct  9 09:34:40 compute-0 ceph-mon[4497]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Oct  9 09:34:40 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:40 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:40 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:40 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:40 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:34:40 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:34:40 compute-0 irqbalance[794]: Cannot change IRQ 44 affinity: Operation not permitted
Oct  9 09:34:40 compute-0 irqbalance[794]: IRQ 44 affinity is now unmanaged
Oct  9 09:34:40 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 09:34:40 compute-0 podman[10807]: 2025-10-09 09:34:40.632266631 +0000 UTC m=+0.026602138 container create 5fae32ab4b97c40d607e6abcccbf8fb307eef37b1f305e8b8d7d2c2b933e8a2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:34:40 compute-0 systemd[1]: Started libpod-conmon-5fae32ab4b97c40d607e6abcccbf8fb307eef37b1f305e8b8d7d2c2b933e8a2a.scope.
Oct  9 09:34:40 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:40 compute-0 podman[10807]: 2025-10-09 09:34:40.679184995 +0000 UTC m=+0.073520502 container init 5fae32ab4b97c40d607e6abcccbf8fb307eef37b1f305e8b8d7d2c2b933e8a2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_wozniak, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:40 compute-0 podman[10807]: 2025-10-09 09:34:40.683073077 +0000 UTC m=+0.077408583 container start 5fae32ab4b97c40d607e6abcccbf8fb307eef37b1f305e8b8d7d2c2b933e8a2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_wozniak, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:34:40 compute-0 podman[10807]: 2025-10-09 09:34:40.684245787 +0000 UTC m=+0.078581295 container attach 5fae32ab4b97c40d607e6abcccbf8fb307eef37b1f305e8b8d7d2c2b933e8a2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  9 09:34:40 compute-0 charming_wozniak[10820]: 167 167
Oct  9 09:34:40 compute-0 systemd[1]: libpod-5fae32ab4b97c40d607e6abcccbf8fb307eef37b1f305e8b8d7d2c2b933e8a2a.scope: Deactivated successfully.
Oct  9 09:34:40 compute-0 podman[10807]: 2025-10-09 09:34:40.686668305 +0000 UTC m=+0.081003822 container died 5fae32ab4b97c40d607e6abcccbf8fb307eef37b1f305e8b8d7d2c2b933e8a2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:34:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-b11f9e32d82ff4d4881e2361478f0d975d5dda536d794f33ae590dad74bf2583-merged.mount: Deactivated successfully.
Oct  9 09:34:40 compute-0 podman[10807]: 2025-10-09 09:34:40.702680195 +0000 UTC m=+0.097015702 container remove 5fae32ab4b97c40d607e6abcccbf8fb307eef37b1f305e8b8d7d2c2b933e8a2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_wozniak, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:34:40 compute-0 podman[10807]: 2025-10-09 09:34:40.621408279 +0000 UTC m=+0.015743786 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:40 compute-0 systemd[1]: libpod-conmon-5fae32ab4b97c40d607e6abcccbf8fb307eef37b1f305e8b8d7d2c2b933e8a2a.scope: Deactivated successfully.
Oct  9 09:34:40 compute-0 podman[10841]: 2025-10-09 09:34:40.813581638 +0000 UTC m=+0.027979827 container create fbc016a43e8b07b268fa9e4a11fa41a6cfee8ab9b64dc2afdd583326392eef06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 09:34:40 compute-0 ceph-mgr[4772]: [progress INFO root] Writing back 2 completed events
Oct  9 09:34:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 09:34:40 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:40 compute-0 systemd[1]: Started libpod-conmon-fbc016a43e8b07b268fa9e4a11fa41a6cfee8ab9b64dc2afdd583326392eef06.scope.
Oct  9 09:34:40 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c7f47b926eac936ee2157423b48baed83ed8b8fd9041cb994d01a66ee15d019/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c7f47b926eac936ee2157423b48baed83ed8b8fd9041cb994d01a66ee15d019/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c7f47b926eac936ee2157423b48baed83ed8b8fd9041cb994d01a66ee15d019/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c7f47b926eac936ee2157423b48baed83ed8b8fd9041cb994d01a66ee15d019/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c7f47b926eac936ee2157423b48baed83ed8b8fd9041cb994d01a66ee15d019/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:40 compute-0 podman[10841]: 2025-10-09 09:34:40.863512882 +0000 UTC m=+0.077911071 container init fbc016a43e8b07b268fa9e4a11fa41a6cfee8ab9b64dc2afdd583326392eef06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hofstadter, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:34:40 compute-0 podman[10841]: 2025-10-09 09:34:40.868629549 +0000 UTC m=+0.083027728 container start fbc016a43e8b07b268fa9e4a11fa41a6cfee8ab9b64dc2afdd583326392eef06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  9 09:34:40 compute-0 podman[10841]: 2025-10-09 09:34:40.870371213 +0000 UTC m=+0.084769392 container attach fbc016a43e8b07b268fa9e4a11fa41a6cfee8ab9b64dc2afdd583326392eef06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 09:34:40 compute-0 podman[10841]: 2025-10-09 09:34:40.802469726 +0000 UTC m=+0.016867905 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:34:41 compute-0 vigilant_hofstadter[10854]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:34:41 compute-0 vigilant_hofstadter[10854]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:34:41 compute-0 vigilant_hofstadter[10854]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:34:41 compute-0 vigilant_hofstadter[10854]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new c1284347-e90b-4f83-b56e-ee0190c7ef56
Oct  9 09:34:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "6a6825df-a8f3-41ad-b7ed-1604f01d2f74"} v 0)
Oct  9 09:34:41 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/4063109686' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6a6825df-a8f3-41ad-b7ed-1604f01d2f74"}]: dispatch
Oct  9 09:34:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Oct  9 09:34:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  9 09:34:41 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/4063109686' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6a6825df-a8f3-41ad-b7ed-1604f01d2f74"}]': finished
Oct  9 09:34:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Oct  9 09:34:41 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Oct  9 09:34:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 09:34:41 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 09:34:41 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  9 09:34:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "c1284347-e90b-4f83-b56e-ee0190c7ef56"} v 0)
Oct  9 09:34:41 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3555096505' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c1284347-e90b-4f83-b56e-ee0190c7ef56"}]: dispatch
Oct  9 09:34:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Oct  9 09:34:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  9 09:34:41 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3555096505' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c1284347-e90b-4f83-b56e-ee0190c7ef56"}]': finished
Oct  9 09:34:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Oct  9 09:34:41 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Oct  9 09:34:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 09:34:41 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 09:34:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  9 09:34:41 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  9 09:34:41 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  9 09:34:41 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  9 09:34:41 compute-0 vigilant_hofstadter[10854]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Oct  9 09:34:41 compute-0 lvm[10915]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:34:41 compute-0 lvm[10915]: VG ceph_vg0 finished
Oct  9 09:34:41 compute-0 vigilant_hofstadter[10854]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct  9 09:34:41 compute-0 vigilant_hofstadter[10854]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  9 09:34:41 compute-0 vigilant_hofstadter[10854]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct  9 09:34:41 compute-0 vigilant_hofstadter[10854]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Oct  9 09:34:41 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:41 compute-0 ceph-mon[4497]: from='client.? 192.168.122.101:0/4063109686' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6a6825df-a8f3-41ad-b7ed-1604f01d2f74"}]: dispatch
Oct  9 09:34:41 compute-0 ceph-mon[4497]: from='client.? 192.168.122.101:0/4063109686' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6a6825df-a8f3-41ad-b7ed-1604f01d2f74"}]': finished
Oct  9 09:34:41 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3555096505' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c1284347-e90b-4f83-b56e-ee0190c7ef56"}]: dispatch
Oct  9 09:34:41 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3555096505' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c1284347-e90b-4f83-b56e-ee0190c7ef56"}]': finished
Oct  9 09:34:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Oct  9 09:34:41 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/747412709' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  9 09:34:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Oct  9 09:34:41 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2672055962' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  9 09:34:41 compute-0 vigilant_hofstadter[10854]: stderr: got monmap epoch 1
Oct  9 09:34:41 compute-0 vigilant_hofstadter[10854]: --> Creating keyring file for osd.1
Oct  9 09:34:41 compute-0 vigilant_hofstadter[10854]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Oct  9 09:34:41 compute-0 vigilant_hofstadter[10854]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Oct  9 09:34:41 compute-0 vigilant_hofstadter[10854]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid c1284347-e90b-4f83-b56e-ee0190c7ef56 --setuser ceph --setgroup ceph
Oct  9 09:34:42 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 09:34:42 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct  9 09:34:43 compute-0 ceph-mon[4497]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct  9 09:34:44 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 09:34:44 compute-0 vigilant_hofstadter[10854]: stderr: 2025-10-09T09:34:42.008+0000 7ff31f12d740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Oct  9 09:34:44 compute-0 vigilant_hofstadter[10854]: stderr: 2025-10-09T09:34:42.270+0000 7ff31f12d740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Oct  9 09:34:44 compute-0 vigilant_hofstadter[10854]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct  9 09:34:44 compute-0 vigilant_hofstadter[10854]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  9 09:34:44 compute-0 vigilant_hofstadter[10854]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct  9 09:34:45 compute-0 vigilant_hofstadter[10854]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct  9 09:34:45 compute-0 vigilant_hofstadter[10854]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct  9 09:34:45 compute-0 vigilant_hofstadter[10854]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  9 09:34:45 compute-0 vigilant_hofstadter[10854]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  9 09:34:45 compute-0 vigilant_hofstadter[10854]: --> ceph-volume lvm activate successful for osd ID: 1
Oct  9 09:34:45 compute-0 vigilant_hofstadter[10854]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct  9 09:34:45 compute-0 systemd[1]: libpod-fbc016a43e8b07b268fa9e4a11fa41a6cfee8ab9b64dc2afdd583326392eef06.scope: Deactivated successfully.
Oct  9 09:34:45 compute-0 systemd[1]: libpod-fbc016a43e8b07b268fa9e4a11fa41a6cfee8ab9b64dc2afdd583326392eef06.scope: Consumed 1.437s CPU time.
Oct  9 09:34:45 compute-0 podman[10841]: 2025-10-09 09:34:45.059794232 +0000 UTC m=+4.274192411 container died fbc016a43e8b07b268fa9e4a11fa41a6cfee8ab9b64dc2afdd583326392eef06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c7f47b926eac936ee2157423b48baed83ed8b8fd9041cb994d01a66ee15d019-merged.mount: Deactivated successfully.
Oct  9 09:34:45 compute-0 podman[10841]: 2025-10-09 09:34:45.081685568 +0000 UTC m=+4.296083747 container remove fbc016a43e8b07b268fa9e4a11fa41a6cfee8ab9b64dc2afdd583326392eef06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hofstadter, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct  9 09:34:45 compute-0 systemd[1]: libpod-conmon-fbc016a43e8b07b268fa9e4a11fa41a6cfee8ab9b64dc2afdd583326392eef06.scope: Deactivated successfully.
Oct  9 09:34:45 compute-0 podman[11927]: 2025-10-09 09:34:45.45018748 +0000 UTC m=+0.026621094 container create 2e92927a9bc94917fcb86ce1d77e6582779578a89815b9d4a703e91ffa5b7092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  9 09:34:45 compute-0 systemd[1]: Started libpod-conmon-2e92927a9bc94917fcb86ce1d77e6582779578a89815b9d4a703e91ffa5b7092.scope.
Oct  9 09:34:45 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:45 compute-0 podman[11927]: 2025-10-09 09:34:45.501171059 +0000 UTC m=+0.077604693 container init 2e92927a9bc94917fcb86ce1d77e6582779578a89815b9d4a703e91ffa5b7092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_murdock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  9 09:34:45 compute-0 podman[11927]: 2025-10-09 09:34:45.505390164 +0000 UTC m=+0.081823778 container start 2e92927a9bc94917fcb86ce1d77e6582779578a89815b9d4a703e91ffa5b7092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_murdock, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  9 09:34:45 compute-0 podman[11927]: 2025-10-09 09:34:45.50638529 +0000 UTC m=+0.082818904 container attach 2e92927a9bc94917fcb86ce1d77e6582779578a89815b9d4a703e91ffa5b7092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_murdock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:34:45 compute-0 flamboyant_murdock[11940]: 167 167
Oct  9 09:34:45 compute-0 podman[11927]: 2025-10-09 09:34:45.50843199 +0000 UTC m=+0.084865603 container died 2e92927a9bc94917fcb86ce1d77e6582779578a89815b9d4a703e91ffa5b7092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  9 09:34:45 compute-0 systemd[1]: libpod-2e92927a9bc94917fcb86ce1d77e6582779578a89815b9d4a703e91ffa5b7092.scope: Deactivated successfully.
Oct  9 09:34:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-253598fdf263b37b01d47a37b1b76ae67e7c1318137ecac26920b47a2ead630a-merged.mount: Deactivated successfully.
Oct  9 09:34:45 compute-0 podman[11927]: 2025-10-09 09:34:45.52666657 +0000 UTC m=+0.103100184 container remove 2e92927a9bc94917fcb86ce1d77e6582779578a89815b9d4a703e91ffa5b7092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_murdock, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  9 09:34:45 compute-0 podman[11927]: 2025-10-09 09:34:45.438246837 +0000 UTC m=+0.014680471 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:45 compute-0 systemd[1]: libpod-conmon-2e92927a9bc94917fcb86ce1d77e6582779578a89815b9d4a703e91ffa5b7092.scope: Deactivated successfully.
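The create, init, start, attach, died, remove sequence above is the normal life of a one-shot cephadm helper container; podman assigns throwaway names such as flamboyant_murdock, and the image-pull event is logged out of order with an earlier monotonic timestamp. To watch the same lifecycle live, something like the following should work (a sketch; output formatting varies by podman version):

    # Stream container lifecycle events (create/init/start/attach/died/remove)
    # as they happen, matching what journald recorded above.
    podman events --filter type=container

    # For a container that still exists, read its exit code directly:
    podman inspect --format '{{.State.ExitCode}}' <container-id>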
Oct  9 09:34:45 compute-0 podman[11961]: 2025-10-09 09:34:45.635817132 +0000 UTC m=+0.025253324 container create 31be322c32ca053ab7851d7738ebb4cdfbab2837410c92cdf9bb462a5afd633c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hertz, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:45 compute-0 systemd[1]: Started libpod-conmon-31be322c32ca053ab7851d7738ebb4cdfbab2837410c92cdf9bb462a5afd633c.scope.
Oct  9 09:34:45 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2f202c3c7fe0e98831b54dfa54baf0e5f6c399fcc822b0c89d43534683e6c53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2f202c3c7fe0e98831b54dfa54baf0e5f6c399fcc822b0c89d43534683e6c53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2f202c3c7fe0e98831b54dfa54baf0e5f6c399fcc822b0c89d43534683e6c53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2f202c3c7fe0e98831b54dfa54baf0e5f6c399fcc822b0c89d43534683e6c53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
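The kernel warnings above mean the XFS filesystem backing the overlay mounts was formatted without big timestamps, so inode timestamps top out at 2038 (0x7fffffff). This is informational, not an error. As a sketch, assuming a reasonably recent xfsprogs that reports the bigtime feature flag, the state can be checked with:

    # bigtime=1 means the filesystem supports timestamps past 2038:
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'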
Oct  9 09:34:45 compute-0 podman[11961]: 2025-10-09 09:34:45.693135454 +0000 UTC m=+0.082571667 container init 31be322c32ca053ab7851d7738ebb4cdfbab2837410c92cdf9bb462a5afd633c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 09:34:45 compute-0 podman[11961]: 2025-10-09 09:34:45.69881398 +0000 UTC m=+0.088250174 container start 31be322c32ca053ab7851d7738ebb4cdfbab2837410c92cdf9bb462a5afd633c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:34:45 compute-0 podman[11961]: 2025-10-09 09:34:45.69978453 +0000 UTC m=+0.089220733 container attach 31be322c32ca053ab7851d7738ebb4cdfbab2837410c92cdf9bb462a5afd633c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hertz, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:45 compute-0 podman[11961]: 2025-10-09 09:34:45.625216304 +0000 UTC m=+0.014652507 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:45 compute-0 confident_hertz[11974]: {
Oct  9 09:34:45 compute-0 confident_hertz[11974]:    "1": [
Oct  9 09:34:45 compute-0 confident_hertz[11974]:        {
Oct  9 09:34:45 compute-0 confident_hertz[11974]:            "devices": [
Oct  9 09:34:45 compute-0 confident_hertz[11974]:                "/dev/loop3"
Oct  9 09:34:45 compute-0 confident_hertz[11974]:            ],
Oct  9 09:34:45 compute-0 confident_hertz[11974]:            "lv_name": "ceph_lv0",
Oct  9 09:34:45 compute-0 confident_hertz[11974]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:34:45 compute-0 confident_hertz[11974]:            "lv_size": "21470642176",
Oct  9 09:34:45 compute-0 confident_hertz[11974]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:34:45 compute-0 confident_hertz[11974]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:34:45 compute-0 confident_hertz[11974]:            "name": "ceph_lv0",
Oct  9 09:34:45 compute-0 confident_hertz[11974]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:34:45 compute-0 confident_hertz[11974]:            "tags": {
Oct  9 09:34:45 compute-0 confident_hertz[11974]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:34:45 compute-0 confident_hertz[11974]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:34:45 compute-0 confident_hertz[11974]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:34:45 compute-0 confident_hertz[11974]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:34:45 compute-0 confident_hertz[11974]:                "ceph.cluster_name": "ceph",
Oct  9 09:34:45 compute-0 confident_hertz[11974]:                "ceph.crush_device_class": "",
Oct  9 09:34:45 compute-0 confident_hertz[11974]:                "ceph.encrypted": "0",
Oct  9 09:34:45 compute-0 confident_hertz[11974]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:34:45 compute-0 confident_hertz[11974]:                "ceph.osd_id": "1",
Oct  9 09:34:45 compute-0 confident_hertz[11974]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:34:45 compute-0 confident_hertz[11974]:                "ceph.type": "block",
Oct  9 09:34:45 compute-0 confident_hertz[11974]:                "ceph.vdo": "0",
Oct  9 09:34:45 compute-0 confident_hertz[11974]:                "ceph.with_tpm": "0"
Oct  9 09:34:45 compute-0 confident_hertz[11974]:            },
Oct  9 09:34:45 compute-0 confident_hertz[11974]:            "type": "block",
Oct  9 09:34:45 compute-0 confident_hertz[11974]:            "vg_name": "ceph_vg0"
Oct  9 09:34:45 compute-0 confident_hertz[11974]:        }
Oct  9 09:34:45 compute-0 confident_hertz[11974]:    ]
Oct  9 09:34:45 compute-0 confident_hertz[11974]: }
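The JSON block above is ceph-volume lvm list output, keyed by OSD id: for OSD 1 it records the backing device (/dev/loop3), the LV path and size, and the ceph.* LV tags (cluster fsid, osd fsid, osd id, encryption state) that drive activation later in this log. To regenerate it on the host, a sketch (cephadm may want a -- separator before the ceph-volume arguments, depending on version):

    # Dump the LVM-backed OSD inventory as JSON, as captured above:
    cephadm ceph-volume -- lvm list --format json

    # Extract just the LV path for OSD 1 (assumes jq is installed):
    cephadm ceph-volume -- lvm list --format json | jq -r '."1"[0].lv_path'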
Oct  9 09:34:45 compute-0 systemd[1]: libpod-31be322c32ca053ab7851d7738ebb4cdfbab2837410c92cdf9bb462a5afd633c.scope: Deactivated successfully.
Oct  9 09:34:45 compute-0 podman[11961]: 2025-10-09 09:34:45.935696388 +0000 UTC m=+0.325132580 container died 31be322c32ca053ab7851d7738ebb4cdfbab2837410c92cdf9bb462a5afd633c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct  9 09:34:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2f202c3c7fe0e98831b54dfa54baf0e5f6c399fcc822b0c89d43534683e6c53-merged.mount: Deactivated successfully.
Oct  9 09:34:45 compute-0 podman[11961]: 2025-10-09 09:34:45.956338978 +0000 UTC m=+0.345775171 container remove 31be322c32ca053ab7851d7738ebb4cdfbab2837410c92cdf9bb462a5afd633c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  9 09:34:45 compute-0 systemd[1]: libpod-conmon-31be322c32ca053ab7851d7738ebb4cdfbab2837410c92cdf9bb462a5afd633c.scope: Deactivated successfully.
Oct  9 09:34:45 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Oct  9 09:34:45 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  9 09:34:45 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:34:45 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
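The two mon_command dispatches above are the mgr gathering what it needs before deploying the daemon: the OSD's cephx key and a minimal ceph.conf to ship into the container. The CLI equivalents, runnable from any admin node, are:

    # Same commands the mgr dispatched to the mon:
    ceph auth get osd.1
    ceph config generate-minimal-conf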
Oct  9 09:34:45 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Oct  9 09:34:45 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Oct  9 09:34:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:34:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Oct  9 09:34:46 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  9 09:34:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:34:46 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:34:46 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Oct  9 09:34:46 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Oct  9 09:34:46 compute-0 podman[12074]: 2025-10-09 09:34:46.348491675 +0000 UTC m=+0.024754125 container create 1f24544f4f1a913b1dde5211e54e253650c1303cd06877e638fdb0fd3cd17ba0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_hermann, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct  9 09:34:46 compute-0 systemd[1]: Started libpod-conmon-1f24544f4f1a913b1dde5211e54e253650c1303cd06877e638fdb0fd3cd17ba0.scope.
Oct  9 09:34:46 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:46 compute-0 podman[12074]: 2025-10-09 09:34:46.396227951 +0000 UTC m=+0.072490420 container init 1f24544f4f1a913b1dde5211e54e253650c1303cd06877e638fdb0fd3cd17ba0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  9 09:34:46 compute-0 podman[12074]: 2025-10-09 09:34:46.401285826 +0000 UTC m=+0.077548276 container start 1f24544f4f1a913b1dde5211e54e253650c1303cd06877e638fdb0fd3cd17ba0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_hermann, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:34:46 compute-0 podman[12074]: 2025-10-09 09:34:46.402472284 +0000 UTC m=+0.078734754 container attach 1f24544f4f1a913b1dde5211e54e253650c1303cd06877e638fdb0fd3cd17ba0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_hermann, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:34:46 compute-0 suspicious_hermann[12088]: 167 167
Oct  9 09:34:46 compute-0 systemd[1]: libpod-1f24544f4f1a913b1dde5211e54e253650c1303cd06877e638fdb0fd3cd17ba0.scope: Deactivated successfully.
Oct  9 09:34:46 compute-0 conmon[12088]: conmon 1f24544f4f1a913b1dde <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f24544f4f1a913b1dde5211e54e253650c1303cd06877e638fdb0fd3cd17ba0.scope/container/memory.events
Oct  9 09:34:46 compute-0 podman[12074]: 2025-10-09 09:34:46.404281405 +0000 UTC m=+0.080543865 container died 1f24544f4f1a913b1dde5211e54e253650c1303cd06877e638fdb0fd3cd17ba0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_hermann, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  9 09:34:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ee62538761754532e22034d3ca66a51188d1efb9c3b9771306587643a41b87a-merged.mount: Deactivated successfully.
Oct  9 09:34:46 compute-0 podman[12074]: 2025-10-09 09:34:46.421106818 +0000 UTC m=+0.097369258 container remove 1f24544f4f1a913b1dde5211e54e253650c1303cd06877e638fdb0fd3cd17ba0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_hermann, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct  9 09:34:46 compute-0 podman[12074]: 2025-10-09 09:34:46.33824829 +0000 UTC m=+0.014510751 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:46 compute-0 systemd[1]: libpod-conmon-1f24544f4f1a913b1dde5211e54e253650c1303cd06877e638fdb0fd3cd17ba0.scope: Deactivated successfully.
Oct  9 09:34:46 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 09:34:46 compute-0 podman[12116]: 2025-10-09 09:34:46.593575535 +0000 UTC m=+0.027507627 container create 23727cf8448361af54e64daac49185fa2848420d37471303a437bcb66b27f458 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:34:46 compute-0 systemd[1]: Started libpod-conmon-23727cf8448361af54e64daac49185fa2848420d37471303a437bcb66b27f458.scope.
Oct  9 09:34:46 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d06d6a5739f642f250373acf9fa49c7aec925d6e77a4d0a9a164c8addbf536d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d06d6a5739f642f250373acf9fa49c7aec925d6e77a4d0a9a164c8addbf536d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d06d6a5739f642f250373acf9fa49c7aec925d6e77a4d0a9a164c8addbf536d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d06d6a5739f642f250373acf9fa49c7aec925d6e77a4d0a9a164c8addbf536d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d06d6a5739f642f250373acf9fa49c7aec925d6e77a4d0a9a164c8addbf536d6/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:46 compute-0 podman[12116]: 2025-10-09 09:34:46.64548004 +0000 UTC m=+0.079412150 container init 23727cf8448361af54e64daac49185fa2848420d37471303a437bcb66b27f458 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct  9 09:34:46 compute-0 podman[12116]: 2025-10-09 09:34:46.652100433 +0000 UTC m=+0.086032523 container start 23727cf8448361af54e64daac49185fa2848420d37471303a437bcb66b27f458 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  9 09:34:46 compute-0 podman[12116]: 2025-10-09 09:34:46.653262523 +0000 UTC m=+0.087194614 container attach 23727cf8448361af54e64daac49185fa2848420d37471303a437bcb66b27f458 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate-test, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 09:34:46 compute-0 podman[12116]: 2025-10-09 09:34:46.58236639 +0000 UTC m=+0.016298501 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate-test[12129]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Oct  9 09:34:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate-test[12129]:                            [--no-systemd] [--no-tmpfs]
Oct  9 09:34:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate-test[12129]: ceph-volume activate: error: unrecognized arguments: --bad-option
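The activate-test container exits here because an unrecognized flag was passed; per the usage text printed just above, ceph-volume activate accepts only --osd-id, --osd-uuid, --no-systemd and --no-tmpfs. A valid invocation for this OSD, with the id and fsid taken from the lvm list output earlier in the log, would be:

    # Flags limited to those in the usage message above; osd id/fsid come
    # from the ceph.osd_id / ceph.osd_fsid LV tags logged earlier:
    ceph-volume activate --osd-id 1 \
        --osd-uuid c1284347-e90b-4f83-b56e-ee0190c7ef56 \
        --no-systemd --no-tmpfs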
Oct  9 09:34:46 compute-0 systemd[1]: libpod-23727cf8448361af54e64daac49185fa2848420d37471303a437bcb66b27f458.scope: Deactivated successfully.
Oct  9 09:34:46 compute-0 podman[12116]: 2025-10-09 09:34:46.804298578 +0000 UTC m=+0.238230689 container died 23727cf8448361af54e64daac49185fa2848420d37471303a437bcb66b27f458 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-d06d6a5739f642f250373acf9fa49c7aec925d6e77a4d0a9a164c8addbf536d6-merged.mount: Deactivated successfully.
Oct  9 09:34:46 compute-0 podman[12116]: 2025-10-09 09:34:46.824313357 +0000 UTC m=+0.258245448 container remove 23727cf8448361af54e64daac49185fa2848420d37471303a437bcb66b27f458 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate-test, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  9 09:34:46 compute-0 systemd[1]: libpod-conmon-23727cf8448361af54e64daac49185fa2848420d37471303a437bcb66b27f458.scope: Deactivated successfully.
Oct  9 09:34:46 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  9 09:34:46 compute-0 ceph-mon[4497]: Deploying daemon osd.1 on compute-0
Oct  9 09:34:46 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  9 09:34:46 compute-0 ceph-mon[4497]: Deploying daemon osd.0 on compute-1
Oct  9 09:34:46 compute-0 systemd[1]: Reloading.
Oct  9 09:34:47 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:34:47 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:34:47 compute-0 systemd[1]: Reloading.
Oct  9 09:34:47 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:34:47 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
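The two Reloading passes are systemd re-reading its unit set after cephadm drops in the new OSD unit; the generator messages are pre-existing noise from the legacy SysV network script and the non-executable rc.local, repeated on every daemon-reload. To see the compatibility unit the generator synthesized (a sketch, assuming the SysV script is still installed):

    # Show the unit systemd-sysv-generator produced for the SysV script:
    systemctl cat network.service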
Oct  9 09:34:47 compute-0 systemd[1]: Starting Ceph osd.1 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:34:47 compute-0 podman[12279]: 2025-10-09 09:34:47.524158269 +0000 UTC m=+0.024579225 container create e064e21fd7685f78f7953629b29105c6a45aa71ea8704f6e0d67d64fb97671d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:34:47 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe63b4bc0ff43f197983e981b2ddfc824f808903134ba806979c8b3126ec6009/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe63b4bc0ff43f197983e981b2ddfc824f808903134ba806979c8b3126ec6009/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe63b4bc0ff43f197983e981b2ddfc824f808903134ba806979c8b3126ec6009/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe63b4bc0ff43f197983e981b2ddfc824f808903134ba806979c8b3126ec6009/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe63b4bc0ff43f197983e981b2ddfc824f808903134ba806979c8b3126ec6009/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:47 compute-0 podman[12279]: 2025-10-09 09:34:47.571018734 +0000 UTC m=+0.071439699 container init e064e21fd7685f78f7953629b29105c6a45aa71ea8704f6e0d67d64fb97671d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Oct  9 09:34:47 compute-0 podman[12279]: 2025-10-09 09:34:47.576927394 +0000 UTC m=+0.077348340 container start e064e21fd7685f78f7953629b29105c6a45aa71ea8704f6e0d67d64fb97671d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 09:34:47 compute-0 podman[12279]: 2025-10-09 09:34:47.578132968 +0000 UTC m=+0.078553913 container attach e064e21fd7685f78f7953629b29105c6a45aa71ea8704f6e0d67d64fb97671d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:34:47 compute-0 podman[12279]: 2025-10-09 09:34:47.513988003 +0000 UTC m=+0.014408969 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate[12291]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:34:47 compute-0 bash[12279]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:34:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate[12291]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:34:47 compute-0 bash[12279]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:34:48 compute-0 lvm[12375]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:34:48 compute-0 lvm[12375]: VG ceph_vg0 finished
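LVM's event-driven autoactivation reports the PV online and the VG complete, which is what makes /dev/ceph_vg0/ceph_lv0 available to the activate step running in the container. The same state can be checked by hand with the names from this log:

    # Confirm the PV/VG/LV the OSD rides on:
    pvs /dev/loop3
    lvs ceph_vg0/ceph_lv0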
Oct  9 09:34:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate[12291]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct  9 09:34:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate[12291]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:34:48 compute-0 bash[12279]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct  9 09:34:48 compute-0 bash[12279]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:34:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate[12291]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:34:48 compute-0 bash[12279]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:34:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate[12291]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  9 09:34:48 compute-0 bash[12279]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  9 09:34:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate[12291]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct  9 09:34:48 compute-0 bash[12279]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct  9 09:34:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate[12291]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct  9 09:34:48 compute-0 bash[12279]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct  9 09:34:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate[12291]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct  9 09:34:48 compute-0 bash[12279]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct  9 09:34:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate[12291]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  9 09:34:48 compute-0 bash[12279]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  9 09:34:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate[12291]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  9 09:34:48 compute-0 bash[12279]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  9 09:34:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate[12291]: --> ceph-volume lvm activate successful for osd ID: 1
Oct  9 09:34:48 compute-0 bash[12279]: --> ceph-volume lvm activate successful for osd ID: 1
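The run above (after the expected raw-mode fallback notice) is the standard ceph-volume lvm activation sequence: prime-osd-dir populates /var/lib/ceph/osd/ceph-1 from the BlueStore label, the block symlink is pointed at the LV, and ownership is fixed to ceph:ceph (uid/gid 167, the "167 167" printed by the earlier helper containers). The equivalent manual call, using the id and fsid from this log, is roughly:

    # Re-run activation by hand; cephadm normally does this inside the
    # container, and --no-systemd skips unit management there:
    ceph-volume lvm activate 1 c1284347-e90b-4f83-b56e-ee0190c7ef56 --no-systemd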
Oct  9 09:34:48 compute-0 systemd[1]: libpod-e064e21fd7685f78f7953629b29105c6a45aa71ea8704f6e0d67d64fb97671d6.scope: Deactivated successfully.
Oct  9 09:34:48 compute-0 podman[12279]: 2025-10-09 09:34:48.492114169 +0000 UTC m=+0.992535115 container died e064e21fd7685f78f7953629b29105c6a45aa71ea8704f6e0d67d64fb97671d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:34:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe63b4bc0ff43f197983e981b2ddfc824f808903134ba806979c8b3126ec6009-merged.mount: Deactivated successfully.
Oct  9 09:34:48 compute-0 podman[12279]: 2025-10-09 09:34:48.513677046 +0000 UTC m=+1.014097992 container remove e064e21fd7685f78f7953629b29105c6a45aa71ea8704f6e0d67d64fb97671d6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1-activate, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct  9 09:34:48 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 09:34:48 compute-0 podman[12512]: 2025-10-09 09:34:48.65019622 +0000 UTC m=+0.027584431 container create 5d5fef61306992a706205cfbcd99331c64d740b48f96059bec34b08c86e73d5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  9 09:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a143ab2d1a974c6c319085d69ae526311554cc9521319eb970e427c9d63d26c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a143ab2d1a974c6c319085d69ae526311554cc9521319eb970e427c9d63d26c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a143ab2d1a974c6c319085d69ae526311554cc9521319eb970e427c9d63d26c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a143ab2d1a974c6c319085d69ae526311554cc9521319eb970e427c9d63d26c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a143ab2d1a974c6c319085d69ae526311554cc9521319eb970e427c9d63d26c/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:48 compute-0 podman[12512]: 2025-10-09 09:34:48.691552637 +0000 UTC m=+0.068940868 container init 5d5fef61306992a706205cfbcd99331c64d740b48f96059bec34b08c86e73d5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct  9 09:34:48 compute-0 podman[12512]: 2025-10-09 09:34:48.695858184 +0000 UTC m=+0.073246396 container start 5d5fef61306992a706205cfbcd99331c64d740b48f96059bec34b08c86e73d5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:34:48 compute-0 bash[12512]: 5d5fef61306992a706205cfbcd99331c64d740b48f96059bec34b08c86e73d5f
Oct  9 09:34:48 compute-0 podman[12512]: 2025-10-09 09:34:48.637878866 +0000 UTC m=+0.015267097 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:48 compute-0 systemd[1]: Started Ceph osd.1 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
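From this point the OSD runs under the cephadm-generated unit announced in the "Starting Ceph osd.1 ..." line. Assuming the usual ceph-<fsid>@<daemon> unit naming cephadm uses, its status and recent output can be checked with:

    # Status and logs of the containerized OSD via its systemd unit:
    systemctl status ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@osd.1.service
    journalctl -u ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@osd.1.service -n 50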
Oct  9 09:34:48 compute-0 ceph-osd[12528]: set uid:gid to 167:167 (ceph:ceph)
Oct  9 09:34:48 compute-0 ceph-osd[12528]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Oct  9 09:34:48 compute-0 ceph-osd[12528]: pidfile_write: ignore empty --pid-file
Oct  9 09:34:48 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  9 09:34:48 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  9 09:34:48 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:48 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) close
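The repeated bdev open/close cycles here and below are BlueStore probing the device label during startup; the ioctl(F_SET_FILE_RW_HINT) EINVAL is harmless on block devices that do not support write hints. The label BlueStore is reading can be dumped directly:

    # Read the BlueStore label on the OSD's block device:
    ceph-bluestore-tool show-label --dev /dev/ceph_vg0/ceph_lv0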
Oct  9 09:34:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:34:48 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:34:48 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:34:48 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:34:48 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
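The four config-key set calls above are cephadm caching per-host device and daemon inventory for compute-0 and compute-1 in the mon's config-key store. The cached blobs can be read back, for example:

    # Read one of the cached host inventory entries written above:
    ceph config-key get mgr/cephadm/host.compute-0.devices.0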
Oct  9 09:34:48 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  9 09:34:48 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  9 09:34:48 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:48 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) close
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) close
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) close
Oct  9 09:34:49 compute-0 podman[12627]: 2025-10-09 09:34:49.10216579 +0000 UTC m=+0.025561377 container create c331ea46ae8e94de5dca1d63e84c81292b1546858f379c714a50072f813a8a1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:34:49 compute-0 systemd[1]: Started libpod-conmon-c331ea46ae8e94de5dca1d63e84c81292b1546858f379c714a50072f813a8a1f.scope.
Oct  9 09:34:49 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:49 compute-0 podman[12627]: 2025-10-09 09:34:49.183575205 +0000 UTC m=+0.106970812 container init c331ea46ae8e94de5dca1d63e84c81292b1546858f379c714a50072f813a8a1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_dhawan, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  9 09:34:49 compute-0 podman[12627]: 2025-10-09 09:34:49.188150512 +0000 UTC m=+0.111546099 container start c331ea46ae8e94de5dca1d63e84c81292b1546858f379c714a50072f813a8a1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_dhawan, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:49 compute-0 podman[12627]: 2025-10-09 09:34:49.092006995 +0000 UTC m=+0.015402602 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:49 compute-0 podman[12627]: 2025-10-09 09:34:49.189333613 +0000 UTC m=+0.112729199 container attach c331ea46ae8e94de5dca1d63e84c81292b1546858f379c714a50072f813a8a1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  9 09:34:49 compute-0 practical_dhawan[12641]: 167 167
Oct  9 09:34:49 compute-0 systemd[1]: libpod-c331ea46ae8e94de5dca1d63e84c81292b1546858f379c714a50072f813a8a1f.scope: Deactivated successfully.
Oct  9 09:34:49 compute-0 conmon[12641]: conmon c331ea46ae8e94de5dca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c331ea46ae8e94de5dca1d63e84c81292b1546858f379c714a50072f813a8a1f.scope/container/memory.events
Oct  9 09:34:49 compute-0 podman[12627]: 2025-10-09 09:34:49.192711151 +0000 UTC m=+0.116106818 container died c331ea46ae8e94de5dca1d63e84c81292b1546858f379c714a50072f813a8a1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_dhawan, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  9 09:34:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-7384cb77905065114959d766c68642fc6d07d8e767492667e6934e08b5188dc7-merged.mount: Deactivated successfully.
Oct  9 09:34:49 compute-0 podman[12627]: 2025-10-09 09:34:49.212083507 +0000 UTC m=+0.135479093 container remove c331ea46ae8e94de5dca1d63e84c81292b1546858f379c714a50072f813a8a1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_dhawan, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  9 09:34:49 compute-0 systemd[1]: libpod-conmon-c331ea46ae8e94de5dca1d63e84c81292b1546858f379c714a50072f813a8a1f.scope: Deactivated successfully.
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) close
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba5a99c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba5a99c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba5a99c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba5a99c00 /var/lib/ceph/osd/ceph-1/block) close
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba5a99800 /var/lib/ceph/osd/ceph-1/block) close
Oct  9 09:34:49 compute-0 podman[12664]: 2025-10-09 09:34:49.335283959 +0000 UTC m=+0.034833950 container create b4ff34bc99eff4dd682b77e3fa429d1306ef9f60bf264373d277885d231c2120 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_leavitt, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:34:49 compute-0 systemd[1]: Started libpod-conmon-b4ff34bc99eff4dd682b77e3fa429d1306ef9f60bf264373d277885d231c2120.scope.
Oct  9 09:34:49 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/012cea776fc776dd3a8198ed452397682f6acf8a1620dbb7dfefb41bc2775fd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/012cea776fc776dd3a8198ed452397682f6acf8a1620dbb7dfefb41bc2775fd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/012cea776fc776dd3a8198ed452397682f6acf8a1620dbb7dfefb41bc2775fd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/012cea776fc776dd3a8198ed452397682f6acf8a1620dbb7dfefb41bc2775fd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:49 compute-0 podman[12664]: 2025-10-09 09:34:49.395532125 +0000 UTC m=+0.095082106 container init b4ff34bc99eff4dd682b77e3fa429d1306ef9f60bf264373d277885d231c2120 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:34:49 compute-0 podman[12664]: 2025-10-09 09:34:49.401554841 +0000 UTC m=+0.101104822 container start b4ff34bc99eff4dd682b77e3fa429d1306ef9f60bf264373d277885d231c2120 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_leavitt, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Oct  9 09:34:49 compute-0 podman[12664]: 2025-10-09 09:34:49.402539808 +0000 UTC m=+0.102089788 container attach b4ff34bc99eff4dd682b77e3fa429d1306ef9f60bf264373d277885d231c2120 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:34:49 compute-0 podman[12664]: 2025-10-09 09:34:49.324498603 +0000 UTC m=+0.024048604 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:49 compute-0 ceph-osd[12528]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Oct  9 09:34:49 compute-0 ceph-osd[12528]: load: jerasure load: lrc 
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) close
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) close
Oct  9 09:34:49 compute-0 ceph-osd[12528]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  9 09:34:49 compute-0 ceph-osd[12528]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) close
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) close
Oct  9 09:34:49 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:49 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:49 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:49 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:49 compute-0 lvm[12781]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:34:49 compute-0 lvm[12781]: VG ceph_vg0 finished
Oct  9 09:34:49 compute-0 priceless_leavitt[12683]: {}
Oct  9 09:34:49 compute-0 systemd[1]: libpod-b4ff34bc99eff4dd682b77e3fa429d1306ef9f60bf264373d277885d231c2120.scope: Deactivated successfully.
Oct  9 09:34:49 compute-0 conmon[12683]: conmon b4ff34bc99eff4dd682b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b4ff34bc99eff4dd682b77e3fa429d1306ef9f60bf264373d277885d231c2120.scope/container/memory.events
Oct  9 09:34:49 compute-0 podman[12664]: 2025-10-09 09:34:49.87174767 +0000 UTC m=+0.571297651 container died b4ff34bc99eff4dd682b77e3fa429d1306ef9f60bf264373d277885d231c2120 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_leavitt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:49 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) close
Oct  9 09:34:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-012cea776fc776dd3a8198ed452397682f6acf8a1620dbb7dfefb41bc2775fd4-merged.mount: Deactivated successfully.
Oct  9 09:34:49 compute-0 podman[12664]: 2025-10-09 09:34:49.893760554 +0000 UTC m=+0.593310535 container remove b4ff34bc99eff4dd682b77e3fa429d1306ef9f60bf264373d277885d231c2120 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_leavitt, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 09:34:49 compute-0 systemd[1]: libpod-conmon-b4ff34bc99eff4dd682b77e3fa429d1306ef9f60bf264373d277885d231c2120.scope: Deactivated successfully.
Oct  9 09:34:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:34:49 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:34:49 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:34:49 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:34:49 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bdev(0x563ba6934c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bdev(0x563ba6935000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bdev(0x563ba6935000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bdev(0x563ba6935000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluefs mount
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluefs mount shared_bdev_used = 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: RocksDB version: 7.9.2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Git sha 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Compile date 2025-07-17 03:12:14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: DB SUMMARY
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: DB Session ID:  EVJZ6G9XYTF20QR2L10C
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: CURRENT file:  CURRENT
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: IDENTITY file:  IDENTITY
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                         Options.error_if_exists: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.create_if_missing: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                         Options.paranoid_checks: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                                     Options.env: 0x563ba6905e30
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                                Options.info_log: 0x563ba69097a0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_file_opening_threads: 16
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                              Options.statistics: (nil)
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.use_fsync: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.max_log_file_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                         Options.allow_fallocate: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.use_direct_reads: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.create_missing_column_families: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                              Options.db_log_dir: 
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                                 Options.wal_dir: db.wal
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.advise_random_on_open: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.write_buffer_manager: 0x563ba69fea00
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                            Options.rate_limiter: (nil)
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.unordered_write: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.row_cache: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                              Options.wal_filter: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.allow_ingest_behind: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.two_write_queues: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.manual_wal_flush: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.wal_compression: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.atomic_flush: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.log_readahead_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.allow_data_in_errors: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.db_host_id: __hostname__
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.max_background_jobs: 4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.max_background_compactions: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.max_subcompactions: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.max_open_files: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.bytes_per_sync: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.max_background_flushes: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Compression algorithms supported:
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     kZSTD supported: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     kXpressCompression supported: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     kBZip2Compression supported: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     kZSTDNotFinalCompression supported: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     kLZ4Compression supported: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     kZlibCompression supported: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     kLZ4HCCompression supported: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     kSnappyCompression supported: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563ba6909b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563ba5b2f350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563ba6909b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563ba5b2f350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563ba6909b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563ba5b2f350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
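
The long table_factory records above are single syslog lines in which rsyslog has escaped the embedded newlines of RocksDB's multi-line dump as #012 (and tabs as #011). A minimal Python sketch for unescaping those records and collecting the per-column-family "Options" key/value pairs; the log path and helper names are illustrative, not part of this host's tooling:

    import re

    def unescape_syslog(line: str) -> str:
        # rsyslog escapes control characters octally: #012 = newline, #011 = tab.
        return line.replace("#012", "\n").replace("#011", "\t")

    def parse_rocksdb_options(log_path: str) -> dict:
        # Collect "Options.<key>: <value>" pairs, keyed by column family name.
        options, family = {}, "default"
        kv_pat = re.compile(r"Options\.([\w.\[\]]+):\s+(.+?)\s*$")
        cf_pat = re.compile(r"Options for column family \[([^\]]+)\]")
        with open(log_path) as fh:
            for raw in fh:
                if "rocksdb:" not in raw:
                    continue
                for line in unescape_syslog(raw).splitlines():
                    cf = cf_pat.search(line)
                    if cf:
                        family = cf.group(1)
                        continue
                    kv = kv_pat.search(line)
                    if kv:
                        options.setdefault(family, {})[kv.group(1)] = kv.group(2)
        return options

    # Usage sketch: confirm that the sharded families share one configuration.
    # opts = parse_rocksdb_options("/var/log/messages")
    # assert opts["m-1"] == opts["m-2"]
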
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563ba6909b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563ba5b2f350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
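
Two of the values repeated in every dump determine each shard's memtable footprint, and the arithmetic is worth making explicit: with write_buffer_size = 16 MiB and max_write_buffer_number = 64, a single column family may hold up to 1 GiB of unflushed memtables, and with min_write_buffer_number_to_merge = 6 each flush merges roughly 96 MiB of memtable data before writing L0. A quick check of those figures:

    write_buffer_size = 16_777_216        # Options.write_buffer_size
    max_write_buffer_number = 64          # Options.max_write_buffer_number
    min_merge = 6                         # Options.min_write_buffer_number_to_merge

    # Worst-case in-memory memtable data for one column family:
    print(max_write_buffer_number * write_buffer_size / 2**30, "GiB")   # 1.0 GiB
    # Memtable data merged per flush:
    print(min_merge * write_buffer_size / 2**20, "MiB")                 # 96.0 MiB
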
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563ba6909b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563ba5b2f350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
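
The level sizing in these dumps is also easy to tabulate: with level_compaction_dynamic_level_bytes = 0, max_bytes_for_level_base = 1 GiB, max_bytes_for_level_multiplier = 8, and every addtl factor at 1, the static target for level n is simply 1 GiB x 8^(n-1):

    base = 1_073_741_824     # Options.max_bytes_for_level_base
    mult = 8                 # Options.max_bytes_for_level_multiplier
    for n in range(1, 7):    # Options.num_levels: 7 -> targets for L1..L6
        print(f"L{n}: {base * mult ** (n - 1) / 2**30:.0f} GiB")
    # -> L1: 1, L2: 8, L3: 64, L4: 512, L5: 4096, L6: 32768 GiB
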
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563ba6909b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563ba5b2f350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
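
The compaction throttling values are internally consistent with RocksDB's usual derivations: max_compaction_bytes = 1677721600 is exactly 25 x target_file_size_base (the library's default relation), and the soft and hard pending-compaction limits decode to 64 GiB and 256 GiB. Checking the arithmetic:

    target_file_size_base = 67_108_864
    assert 25 * target_file_size_base == 1_677_721_600   # Options.max_compaction_bytes
    print(68_719_476_736 / 2**30)    # soft_pending_compaction_bytes_limit -> 64.0 GiB
    print(274_877_906_944 / 2**30)   # hard_pending_compaction_bytes_limit -> 256.0 GiB
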
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563ba6909b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563ba5b2f350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
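
One real difference hides inside the otherwise identical table_factory blobs: every m-* and p-* dump reports the same BinnedLRUCache handle (block_cache: 0x563ba5b2f350) with capacity 483183820, while the O-0 dump that follows reports a separate handle (0x563ba5b2e9b0) with capacity 536870912, so the m/p shards appear to share one block cache while O-0 has its own. In round units:

    caches = {
        "m-*/p-* shared BinnedLRUCache": 483_183_820,   # 0x563ba5b2f350
        "O-0 BinnedLRUCache":            536_870_912,   # 0x563ba5b2e9b0
    }
    for name, cap in caches.items():
        print(f"{name}: {cap / 2**20:.1f} MiB")
    # -> 460.8 MiB and 512.0 MiB respectively
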
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563ba6909b80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563ba5b2e9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
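The [O-0] dump above is the per-column-family tuning BlueStore applies here: small 16 MiB memtables but up to 64 of them, merged six at a time per flush, LZ4 compression, and classic level-style compaction with a 1 GiB L1 growing 8x per level. As a rough sketch, the same values can be expressed through the public RocksDB C++ API (the helper name is illustrative, not Ceph's actual code):

    // Rough sketch: the [O-0] values above expressed through the public
    // RocksDB C++ API. The helper name is illustrative, not Ceph's code.
    #include <rocksdb/options.h>

    rocksdb::ColumnFamilyOptions make_onode_cf_options() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16777216;                    // 16 MiB memtables...
      cf.max_write_buffer_number = 64;                    // ...but up to 64 of them
      cf.min_write_buffer_number_to_merge = 6;            // merge 6 memtables per flush
      cf.compression = rocksdb::kLZ4Compression;
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 67108864;                // 64 MiB SST target
      cf.max_bytes_for_level_base = 1073741824;           // 1 GiB at L1
      cf.max_bytes_for_level_multiplier = 8.0;            // 8x growth per level
      cf.compaction_style = rocksdb::kCompactionStyleLevel;
      cf.compaction_pri = rocksdb::kMinOverlappingRatio;
      cf.force_consistency_checks = true;
      cf.ttl = 2592000;                                   // 30 days
      return cf;
    }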
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            table_factory options:
  flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563ba6909b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563ba5b2e9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            table_factory options:
  flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563ba6909b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563ba5b2e9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:635]   (skipping printing options)
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:635]   (skipping printing options)
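All three O-* families share one table_factory block: 4 KiB data blocks, index and filter blocks charged to a 512 MiB BinnedLRUCache (Ceph's own cache implementation), a whole-key bloom filter, and format_version 5; RocksDB stops dumping per-family options after the first ten, which is why the two "(skipping printing options)" lines appear for the remaining families. A hedged upstream approximation, with NewLRUCache standing in for BinnedLRUCache and an assumed 10 bits/key for the bloom filter (the dump does not record it):

    // Hedged approximation of the repeated table_factory block; BinnedLRUCache
    // is Ceph's own cache, so upstream NewLRUCache stands in for it here.
    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    rocksdb::ColumnFamilyOptions with_bluestore_table_factory(
        rocksdb::ColumnFamilyOptions cf) {
      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;                          // 4 KiB data blocks
      t.cache_index_and_filter_blocks = true;       // charge index/filter to the cache
      t.pin_top_level_index_and_filter = true;
      t.block_cache = rocksdb::NewLRUCache(536870912, /*num_shard_bits=*/4);  // 512 MiB
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // bits/key assumed
      t.whole_key_filtering = true;
      t.format_version = 5;
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return cf;
    }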
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
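The manifest recovery above enumerates BlueStore's sharded keyspace: roughly, three omap shards (m-*), three pgmeta-omap shards (p-*), three object/onode shards (O-*), plus the deferred-write family [L] and per-pool-omap family [P], all still on WAL log number 5. The same set can be listed offline with the public API (the path below is an assumption for illustration):

    // Sketch: enumerating the column families recovered above.
    #include <rocksdb/db.h>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
      std::vector<std::string> names;
      rocksdb::Status s = rocksdb::DB::ListColumnFamilies(
          rocksdb::DBOptions(), "/var/lib/ceph/osd/ceph-1/db", &names);
      if (!s.ok()) { std::cerr << s.ToString() << "\n"; return 1; }
      for (const auto& name : names) std::cout << name << "\n";  // default, m-0, ..., P
      return 0;
    }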
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d2e2bdf6-e597-4078-8e17-beba66271af9
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002490165647, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002490165813, "job": 1, "event": "recovery_finished"}
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
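The option string _open_db echoes is Ceph's bluestore_rocksdb_options value, written in RocksDB's own options-string syntax, so it can be reproduced against a stock build with the convenience parser; a sketch with error handling elided:

    // The _open_db string above is RocksDB options-string syntax; this applies
    // it to a base Options with the public parser (error handling elided).
    #include <rocksdb/convenience.h>
    #include <rocksdb/options.h>
    #include <string>

    rocksdb::Options apply_bluestore_rocksdb_options(const rocksdb::Options& base) {
      const std::string opts =
          "compression=kLZ4Compression,max_write_buffer_number=64,"
          "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
          "write_buffer_size=16777216,max_background_jobs=4,"
          "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
          "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
          "max_total_wal_size=1073741824,writable_file_max_buffer_size=0";
      rocksdb::Options out;
      rocksdb::GetOptionsFromString(base, opts, &out);  // size suffixes like "2MB" parse
      return out;
    }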
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: freelist init
Oct  9 09:34:50 compute-0 ceph-osd[12528]: freelist _read_cfg
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
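The _init_alloc figures are hex byte counts and are internally consistent: capacity 0x4ffc00000 is 21470642176 bytes (about 20 GiB), and subtracting the free 0x4ffbfd000 leaves 0x3000 bytes, i.e. three 4 KiB allocation units in use, which is why the reported fragmentation is negligible. A quick check:

    // Quick consistency check of the _init_alloc numbers (all hex bytes).
    #include <cstdint>
    #include <cstdio>

    int main() {
      const uint64_t capacity = 0x4ffc00000ULL;   // 21470642176 B
      const uint64_t free_b   = 0x4ffbfd000ULL;
      const uint64_t block    = 0x1000ULL;        // min_alloc_size: 4 KiB
      std::printf("capacity: %.2f GiB\n", capacity / 1073741824.0);       // 20.00
      std::printf("in use:   %llu B = %llu blocks\n",
                  (unsigned long long)(capacity - free_b),
                  (unsigned long long)((capacity - free_b) / block));     // 12288 B = 3
      return 0;
    }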
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluefs umount
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bdev(0x563ba6935000 /var/lib/ceph/osd/ceph-1/block) close
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bdev(0x563ba6935000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bdev(0x563ba6935000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bdev(0x563ba6935000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluefs mount
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluefs mount shared_bdev_used = 4718592
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
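_prepare_db_environment gives RocksDB two tiered data paths, db and db.slow, each with a target size of 20397110067 bytes (about 19 GiB, 95% of the 20 GiB device); the WAL goes to the separate db.wal directory shown in the summary below. A sketch of the equivalent public-API calls:

    // Sketch of the db/db.slow split above: two tiered data paths with equal
    // target sizes, plus the separate WAL directory.
    #include <rocksdb/options.h>

    void set_bluestore_db_paths(rocksdb::Options& options) {
      options.db_paths.emplace_back("db", 20397110067ULL);       // fast device
      options.db_paths.emplace_back("db.slow", 20397110067ULL);  // spillover
      options.wal_dir = "db.wal";
    }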
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: RocksDB version: 7.9.2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Git sha 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Compile date 2025-07-17 03:12:14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: DB SUMMARY
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: DB Session ID:  EVJZ6G9XYTF20QR2L10D
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: CURRENT file:  CURRENT
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: IDENTITY file:  IDENTITY
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                         Options.error_if_exists: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.create_if_missing: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                         Options.paranoid_checks: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                                     Options.env: 0x563ba6a9a380
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                                Options.info_log: 0x563ba6909920
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_file_opening_threads: 16
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                              Options.statistics: (nil)
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.use_fsync: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.max_log_file_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                         Options.allow_fallocate: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.use_direct_reads: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.create_missing_column_families: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                              Options.db_log_dir: 
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                                 Options.wal_dir: db.wal
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.advise_random_on_open: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.write_buffer_manager: 0x563ba69fea00
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                            Options.rate_limiter: (nil)
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.unordered_write: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.row_cache: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                              Options.wal_filter: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.allow_ingest_behind: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.two_write_queues: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.manual_wal_flush: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.wal_compression: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.atomic_flush: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.log_readahead_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.allow_data_in_errors: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.db_host_id: __hostname__
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.max_background_jobs: 4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.max_background_compactions: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.max_subcompactions: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.max_open_files: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.bytes_per_sync: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.max_background_flushes: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Compression algorithms supported:
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   kZSTD supported: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   kXpressCompression supported: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   kBZip2Compression supported: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   kZSTDNotFinalCompression supported: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   kLZ4Compression supported: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   kZlibCompression supported: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   kLZ4HCCompression supported: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   kSnappyCompression supported: 1
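This matrix is compile-time: the ceph-osd build links LZ4, Zlib, LZ4HC and Snappy but not ZSTD, BZip2 or Xpress (Windows-only), which is why the options above settle on LZ4. The same list is queryable at runtime:

    // Runtime query matching the compile-time matrix above.
    #include <rocksdb/convenience.h>
    #include <rocksdb/options.h>
    #include <iostream>

    int main() {
      for (rocksdb::CompressionType t : rocksdb::GetSupportedCompressions()) {
        std::cout << static_cast<int>(t) << "\n";  // e.g. kLZ4Compression == 4
      }
      return 0;
    }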
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            table_factory options:
  flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563ba6909680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563ba5b2f350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563ba6909680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563ba5b2f350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563ba6909680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563ba5b2f350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563ba6909680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563ba5b2f350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563ba6909680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563ba5b2f350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563ba6909680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563ba5b2f350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
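The options dumped above are BlueStore's per-column-family RocksDB tuning: 16 MiB memtables (up to 64 of them, merged six at a time before flush), LZ4 compression, and level-style compaction that triggers at 8 L0 files with 64 MiB target files, a 1 GiB level base, and 8x growth per level. A minimal C++ sketch that reproduces this tuning through the public RocksDB API; the path /tmp/osd-style-db is an illustrative assumption, not taken from this log, and this is not Ceph's actual BlueStore code:

    // Hedged sketch: the column-family tuning from the dump above, set via
    // the public RocksDB C++ API. Values are copied from the log; the path
    // is illustrative.
    #include <cassert>
    #include <rocksdb/db.h>
    #include <rocksdb/options.h>

    int main() {
      rocksdb::Options opts;
      opts.create_if_missing = true;
      // Memtables: 16 MiB buffers, up to 64, merged six at a time.
      opts.write_buffer_size = 16 * 1024 * 1024;
      opts.max_write_buffer_number = 64;
      opts.min_write_buffer_number_to_merge = 6;
      // LZ4 on all levels; bottommost compression left at its default.
      opts.compression = rocksdb::kLZ4Compression;
      // Level-style compaction exactly as logged.
      opts.compaction_style = rocksdb::kCompactionStyleLevel;
      opts.level0_file_num_compaction_trigger = 8;
      opts.level0_slowdown_writes_trigger = 20;
      opts.level0_stop_writes_trigger = 36;
      opts.target_file_size_base = 64ULL << 20;    // 67108864
      opts.max_bytes_for_level_base = 1ULL << 30;  // 1073741824
      opts.max_bytes_for_level_multiplier = 8.0;
      rocksdb::DB* db = nullptr;
      rocksdb::Status s = rocksdb::DB::Open(opts, "/tmp/osd-style-db", &db);
      assert(s.ok());
      delete db;
      return 0;
    }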
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563ba6909680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563ba5b2f350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563ba6909ac0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563ba5b2e9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563ba6909ac0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563ba5b2e9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563ba6909ac0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563ba5b2e9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
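Each dump's table_factory block describes the BlockBasedTable layout: 4 KiB data and metadata blocks, index and filter blocks kept in the block cache with the top-level index pinned, a whole-key bloom filter, format_version 5, and a CompactOnDeletionCollector (sliding window 32768, deletion trigger 16384). BinnedLRUCache is a Ceph-internal cache implementation, so the sketch below substitutes stock RocksDB's NewLRUCache as the closest public equivalent, and it assumes 10 bits/key for the bloom filter, since the dump names the policy but not its parameters:

    // Hedged sketch of the table_factory settings shown above, using only
    // public RocksDB APIs. Cache capacity and shard bits match the O-* dumps;
    // the path and bloom bits/key are illustrative assumptions.
    #include <cassert>
    #include <rocksdb/cache.h>
    #include <rocksdb/db.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>
    #include <rocksdb/utilities/table_properties_collectors.h>

    int main() {
      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;
      t.metadata_block_size = 4096;
      t.cache_index_and_filter_blocks = true;   // 1 in the dump
      t.pin_top_level_index_and_filter = true;  // 1 in the dump
      t.whole_key_filtering = true;
      t.format_version = 5;
      t.block_cache = rocksdb::NewLRUCache(536870912, /*num_shard_bits=*/4);
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // assumed bits/key
      rocksdb::Options opts;
      opts.create_if_missing = true;
      opts.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      // CompactOnDeletionCollector: window 32768, trigger 16384, ratio 0.
      opts.table_properties_collector_factories.emplace_back(
          rocksdb::NewCompactOnDeletionCollectorFactory(32768, 16384, 0));
      rocksdb::DB* db = nullptr;
      rocksdb::Status s = rocksdb::DB::Open(opts, "/tmp/osd-style-db", &db);
      assert(s.ok());
      delete db;
      return 0;
    }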
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Oct  9 09:34:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:34:50 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
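The manifest recovery above enumerates BlueStore's column families: default, plus the m-*, p-*, and O-* shards and the L and P families. With the public RocksDB API, a DB holding multiple families must be opened with one descriptor per family; a hedged sketch assuming an existing DB at an illustrative path, with plain ColumnFamilyOptions standing in for BlueStore's per-family tuning:

    // Hedged sketch: discover and open every column family recorded in the
    // MANIFEST, as the recovery lines above do. Path is illustrative and the
    // DB is assumed to already exist there.
    #include <cassert>
    #include <string>
    #include <vector>
    #include <rocksdb/db.h>

    int main() {
      rocksdb::DBOptions db_opts;
      const std::string path = "/tmp/osd-style-db";
      std::vector<std::string> names;
      rocksdb::Status s = rocksdb::DB::ListColumnFamilies(db_opts, path, &names);
      assert(s.ok());
      std::vector<rocksdb::ColumnFamilyDescriptor> descs;
      for (const auto& n : names)
        descs.emplace_back(n, rocksdb::ColumnFamilyOptions());
      std::vector<rocksdb::ColumnFamilyHandle*> handles;
      rocksdb::DB* db = nullptr;
      s = rocksdb::DB::Open(db_opts, path, descs, &handles, &db);
      assert(s.ok());
      for (auto* h : handles) db->DestroyColumnFamilyHandle(h);
      delete db;
      return 0;
    }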
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d2e2bdf6-e597-4078-8e17-beba66271af9
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002490469762, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002490471272, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002490, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2e2bdf6-e597-4078-8e17-beba66271af9", "db_session_id": "EVJZ6G9XYTF20QR2L10D", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002490472170, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002490, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2e2bdf6-e597-4078-8e17-beba66271af9", "db_session_id": "EVJZ6G9XYTF20QR2L10D", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002490472963, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002490, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2e2bdf6-e597-4078-8e17-beba66271af9", "db_session_id": "EVJZ6G9XYTF20QR2L10D", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002490473527, "job": 1, "event": "recovery_finished"}
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563ba6b06000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: DB pointer 0x563ba6ab0000
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Oct  9 09:34:50 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 09:34:50 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563ba5b2f350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563ba5b2f350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563ba5b2f350#2 capacity: 460.80 MB usag
Oct  9 09:34:50 compute-0 ceph-osd[12528]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  9 09:34:50 compute-0 ceph-osd[12528]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  9 09:34:50 compute-0 ceph-osd[12528]: _get_class not permitted to load lua
Oct  9 09:34:50 compute-0 ceph-osd[12528]: _get_class not permitted to load sdk
Oct  9 09:34:50 compute-0 ceph-osd[12528]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  9 09:34:50 compute-0 ceph-osd[12528]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  9 09:34:50 compute-0 ceph-osd[12528]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  9 09:34:50 compute-0 ceph-osd[12528]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  9 09:34:50 compute-0 ceph-osd[12528]: osd.1 0 load_pgs
Oct  9 09:34:50 compute-0 ceph-osd[12528]: osd.1 0 load_pgs opened 0 pgs
Oct  9 09:34:50 compute-0 ceph-osd[12528]: osd.1 0 log_to_monitors true
Oct  9 09:34:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1[12524]: 2025-10-09T09:34:50.492+0000 7f3b7e7aa740 -1 osd.1 0 log_to_monitors true
Oct  9 09:34:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Oct  9 09:34:50 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct  9 09:34:50 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 09:34:50 compute-0 podman[13327]: 2025-10-09 09:34:50.618189479 +0000 UTC m=+0.039229036 container exec fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:34:50 compute-0 podman[13327]: 2025-10-09 09:34:50.69740019 +0000 UTC m=+0.118439737 container exec_died fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  9 09:34:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:34:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:34:50 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:50 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:34:50 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:34:50 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:34:50 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:50 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:50 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:50 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:50 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:50 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:50 compute-0 ceph-mon[4497]: from='osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct  9 09:34:50 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:50 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:50 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:50 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:50 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:34:51 compute-0 podman[13479]: 2025-10-09 09:34:51.244476422 +0000 UTC m=+0.026252900 container create 5d74728a949a7b96d62e7a2f9f58c7024754eef20964da6204156ff7a4f1f7c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:51 compute-0 systemd[1269]: Starting Mark boot as successful...
Oct  9 09:34:51 compute-0 systemd[1]: Started libpod-conmon-5d74728a949a7b96d62e7a2f9f58c7024754eef20964da6204156ff7a4f1f7c7.scope.
Oct  9 09:34:51 compute-0 systemd[1269]: Finished Mark boot as successful.
Oct  9 09:34:51 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:51 compute-0 podman[13479]: 2025-10-09 09:34:51.275789872 +0000 UTC m=+0.057566371 container init 5d74728a949a7b96d62e7a2f9f58c7024754eef20964da6204156ff7a4f1f7c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  9 09:34:51 compute-0 podman[13479]: 2025-10-09 09:34:51.280304113 +0000 UTC m=+0.062080592 container start 5d74728a949a7b96d62e7a2f9f58c7024754eef20964da6204156ff7a4f1f7c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamport, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 09:34:51 compute-0 podman[13479]: 2025-10-09 09:34:51.282173208 +0000 UTC m=+0.063949707 container attach 5d74728a949a7b96d62e7a2f9f58c7024754eef20964da6204156ff7a4f1f7c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamport, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:51 compute-0 competent_lamport[13494]: 167 167
Oct  9 09:34:51 compute-0 systemd[1]: libpod-5d74728a949a7b96d62e7a2f9f58c7024754eef20964da6204156ff7a4f1f7c7.scope: Deactivated successfully.
Oct  9 09:34:51 compute-0 podman[13479]: 2025-10-09 09:34:51.28399431 +0000 UTC m=+0.065770790 container died 5d74728a949a7b96d62e7a2f9f58c7024754eef20964da6204156ff7a4f1f7c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamport, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  9 09:34:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3093317ab8621ce77eb3a4485d30653931c8098ede268802881079a314f4fc87-merged.mount: Deactivated successfully.
Oct  9 09:34:51 compute-0 podman[13479]: 2025-10-09 09:34:51.301129379 +0000 UTC m=+0.082905858 container remove 5d74728a949a7b96d62e7a2f9f58c7024754eef20964da6204156ff7a4f1f7c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_lamport, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:51 compute-0 podman[13479]: 2025-10-09 09:34:51.234162624 +0000 UTC m=+0.015939103 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:51 compute-0 systemd[1]: libpod-conmon-5d74728a949a7b96d62e7a2f9f58c7024754eef20964da6204156ff7a4f1f7c7.scope: Deactivated successfully.
Oct  9 09:34:51 compute-0 podman[13516]: 2025-10-09 09:34:51.417162738 +0000 UTC m=+0.028067812 container create 355f1a337f63a3933a9222f6510cb290cc6f4170aa9a1ce318a100b0d4f425a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_jones, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:34:51 compute-0 systemd[1]: Started libpod-conmon-355f1a337f63a3933a9222f6510cb290cc6f4170aa9a1ce318a100b0d4f425a5.scope.
Oct  9 09:34:51 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  9 09:34:51 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  9 09:34:51 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Oct  9 09:34:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  9 09:34:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b265ccfa4fbcdfb1f00e1d9ea50f5a9827b3a2807d58aa590f3037f2320cb065/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b265ccfa4fbcdfb1f00e1d9ea50f5a9827b3a2807d58aa590f3037f2320cb065/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b265ccfa4fbcdfb1f00e1d9ea50f5a9827b3a2807d58aa590f3037f2320cb065/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b265ccfa4fbcdfb1f00e1d9ea50f5a9827b3a2807d58aa590f3037f2320cb065/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct  9 09:34:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Oct  9 09:34:51 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Oct  9 09:34:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Oct  9 09:34:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  9 09:34:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct  9 09:34:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 09:34:51 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 09:34:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  9 09:34:51 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  9 09:34:51 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  9 09:34:51 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  9 09:34:51 compute-0 podman[13516]: 2025-10-09 09:34:51.467191165 +0000 UTC m=+0.078096240 container init 355f1a337f63a3933a9222f6510cb290cc6f4170aa9a1ce318a100b0d4f425a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_jones, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 09:34:51 compute-0 podman[13516]: 2025-10-09 09:34:51.472080404 +0000 UTC m=+0.082985478 container start 355f1a337f63a3933a9222f6510cb290cc6f4170aa9a1ce318a100b0d4f425a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  9 09:34:51 compute-0 podman[13516]: 2025-10-09 09:34:51.473217056 +0000 UTC m=+0.084122132 container attach 355f1a337f63a3933a9222f6510cb290cc6f4170aa9a1ce318a100b0d4f425a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  9 09:34:51 compute-0 podman[13516]: 2025-10-09 09:34:51.406237638 +0000 UTC m=+0.017142733 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Oct  9 09:34:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct  9 09:34:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:34:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:34:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:34:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:34:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Oct  9 09:34:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:34:51 compute-0 ceph-mgr[4772]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5248M
Oct  9 09:34:51 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5248M
Oct  9 09:34:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct  9 09:34:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:34:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:51 compute-0 trusting_jones[13529]: [
Oct  9 09:34:51 compute-0 trusting_jones[13529]:    {
Oct  9 09:34:51 compute-0 trusting_jones[13529]:        "available": false,
Oct  9 09:34:51 compute-0 trusting_jones[13529]:        "being_replaced": false,
Oct  9 09:34:51 compute-0 trusting_jones[13529]:        "ceph_device_lvm": false,
Oct  9 09:34:51 compute-0 trusting_jones[13529]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  9 09:34:51 compute-0 trusting_jones[13529]:        "lsm_data": {},
Oct  9 09:34:51 compute-0 trusting_jones[13529]:        "lvs": [],
Oct  9 09:34:51 compute-0 trusting_jones[13529]:        "path": "/dev/sr0",
Oct  9 09:34:51 compute-0 trusting_jones[13529]:        "rejected_reasons": [
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "Insufficient space (<5GB)",
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "Has a FileSystem"
Oct  9 09:34:51 compute-0 trusting_jones[13529]:        ],
Oct  9 09:34:51 compute-0 trusting_jones[13529]:        "sys_api": {
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "actuators": null,
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "device_nodes": [
Oct  9 09:34:51 compute-0 trusting_jones[13529]:                "sr0"
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            ],
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "devname": "sr0",
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "human_readable_size": "474.00 KB",
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "id_bus": "ata",
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "model": "QEMU DVD-ROM",
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "nr_requests": "64",
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "parent": "/dev/sr0",
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "partitions": {},
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "path": "/dev/sr0",
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "removable": "1",
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "rev": "2.5+",
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "ro": "0",
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "rotational": "0",
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "sas_address": "",
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "sas_device_handle": "",
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "scheduler_mode": "mq-deadline",
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "sectors": 0,
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "sectorsize": "2048",
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "size": 485376.0,
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "support_discard": "2048",
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "type": "disk",
Oct  9 09:34:51 compute-0 trusting_jones[13529]:            "vendor": "QEMU"
Oct  9 09:34:51 compute-0 trusting_jones[13529]:        }
Oct  9 09:34:51 compute-0 trusting_jones[13529]:    }
Oct  9 09:34:51 compute-0 trusting_jones[13529]: ]
Oct  9 09:34:51 compute-0 systemd[1]: libpod-355f1a337f63a3933a9222f6510cb290cc6f4170aa9a1ce318a100b0d4f425a5.scope: Deactivated successfully.
Oct  9 09:34:51 compute-0 podman[13516]: 2025-10-09 09:34:51.944559864 +0000 UTC m=+0.555464950 container died 355f1a337f63a3933a9222f6510cb290cc6f4170aa9a1ce318a100b0d4f425a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_jones, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct  9 09:34:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-b265ccfa4fbcdfb1f00e1d9ea50f5a9827b3a2807d58aa590f3037f2320cb065-merged.mount: Deactivated successfully.
Oct  9 09:34:51 compute-0 podman[13516]: 2025-10-09 09:34:51.963525513 +0000 UTC m=+0.574430589 container remove 355f1a337f63a3933a9222f6510cb290cc6f4170aa9a1ce318a100b0d4f425a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_jones, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  9 09:34:51 compute-0 systemd[1]: libpod-conmon-355f1a337f63a3933a9222f6510cb290cc6f4170aa9a1ce318a100b0d4f425a5.scope: Deactivated successfully.
Oct  9 09:34:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:34:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:34:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:34:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:34:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Oct  9 09:34:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  9 09:34:52 compute-0 ceph-mgr[4772]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.5M
Oct  9 09:34:52 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.5M
Oct  9 09:34:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct  9 09:34:52 compute-0 ceph-mgr[4772]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct  9 09:34:52 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct  9 09:34:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Oct  9 09:34:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  9 09:34:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  9 09:34:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct  9 09:34:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Oct  9 09:34:52 compute-0 ceph-osd[12528]: osd.1 0 done with init, starting boot process
Oct  9 09:34:52 compute-0 ceph-osd[12528]: osd.1 0 start_boot
Oct  9 09:34:52 compute-0 ceph-osd[12528]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  9 09:34:52 compute-0 ceph-osd[12528]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  9 09:34:52 compute-0 ceph-osd[12528]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  9 09:34:52 compute-0 ceph-osd[12528]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  9 09:34:52 compute-0 ceph-osd[12528]: osd.1 0  bench count 12288000 bsize 4 KiB
Oct  9 09:34:52 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Oct  9 09:34:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Oct  9 09:34:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct  9 09:34:52 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  9 09:34:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-1,root=default}
Oct  9 09:34:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 09:34:52 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 09:34:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  9 09:34:52 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  9 09:34:52 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  9 09:34:52 compute-0 ceph-mon[4497]: from='osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct  9 09:34:52 compute-0 ceph-mon[4497]: from='osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  9 09:34:52 compute-0 ceph-mon[4497]: from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct  9 09:34:52 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:52 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:52 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:52 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:52 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:34:52 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:52 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:52 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:52 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:52 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:52 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:34:52 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  9 09:34:52 compute-0 ceph-mgr[4772]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3144091891; not ready for session (expect reconnect)
Oct  9 09:34:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  9 09:34:52 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  9 09:34:52 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  9 09:34:52 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  9 09:34:53 compute-0 ceph-osd[12528]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 101.548 iops: 25996.309 elapsed_sec: 0.115
Oct  9 09:34:53 compute-0 ceph-osd[12528]: log_channel(cluster) log [WRN] : OSD bench result of 25996.309425 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  9 09:34:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1[12524]: 2025-10-09T09:34:53.130+0000 7f3b7a72d640 -1 osd.1 0 waiting for initial osdmap
Oct  9 09:34:53 compute-0 ceph-osd[12528]: osd.1 0 waiting for initial osdmap
Oct  9 09:34:53 compute-0 ceph-osd[12528]: osd.1 7 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct  9 09:34:53 compute-0 ceph-osd[12528]: osd.1 7 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct  9 09:34:53 compute-0 ceph-osd[12528]: osd.1 7 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct  9 09:34:53 compute-0 ceph-osd[12528]: osd.1 7 check_osdmap_features require_osd_release unknown -> squid
Oct  9 09:34:53 compute-0 ceph-osd[12528]: osd.1 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  9 09:34:53 compute-0 ceph-osd[12528]: osd.1 7 set_numa_affinity not setting numa affinity
Oct  9 09:34:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-1[12524]: 2025-10-09T09:34:53.139+0000 7f3b75d55640 -1 osd.1 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  9 09:34:53 compute-0 ceph-osd[12528]: osd.1 7 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Oct  9 09:34:53 compute-0 python3[14511]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:34:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Oct  9 09:34:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  9 09:34:53 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Oct  9 09:34:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e8 e8: 2 total, 1 up, 2 in
Oct  9 09:34:53 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891] boot
Oct  9 09:34:53 compute-0 ceph-osd[12528]: osd.1 8 state: booting -> active
Oct  9 09:34:53 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 1 up, 2 in
Oct  9 09:34:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 09:34:53 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 09:34:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  9 09:34:53 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  9 09:34:53 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  9 09:34:53 compute-0 ceph-mon[4497]: Adjusting osd_memory_target on compute-1 to  5248M
Oct  9 09:34:53 compute-0 ceph-mon[4497]: Adjusting osd_memory_target on compute-0 to 128.5M
Oct  9 09:34:53 compute-0 ceph-mon[4497]: Unable to set osd_memory_target on compute-0 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct  9 09:34:53 compute-0 ceph-mon[4497]: from='osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  9 09:34:53 compute-0 ceph-mon[4497]: from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct  9 09:34:53 compute-0 ceph-mon[4497]: from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct  9 09:34:53 compute-0 ceph-mon[4497]: from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Oct  9 09:34:53 compute-0 ceph-mon[4497]: osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891] boot
Oct  9 09:34:53 compute-0 ceph-mgr[4772]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3679111284; not ready for session (expect reconnect)
Oct  9 09:34:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 09:34:53 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 09:34:53 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  9 09:34:53 compute-0 podman[14513]: 2025-10-09 09:34:53.535669167 +0000 UTC m=+0.065558830 container create 73daf45d942fc1c6b945ae5ebba9d1aeef02bc1d5ac61bcaeb40d545f3bd6ba4 (image=quay.io/ceph/ceph:v19, name=naughty_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct  9 09:34:53 compute-0 systemd[1]: Started libpod-conmon-73daf45d942fc1c6b945ae5ebba9d1aeef02bc1d5ac61bcaeb40d545f3bd6ba4.scope.
Oct  9 09:34:53 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/641955f23a292a43b7bc8eeb6c1b45a4455889b3f7cf49e334ea3126a8178d35/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/641955f23a292a43b7bc8eeb6c1b45a4455889b3f7cf49e334ea3126a8178d35/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/641955f23a292a43b7bc8eeb6c1b45a4455889b3f7cf49e334ea3126a8178d35/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:53 compute-0 podman[14513]: 2025-10-09 09:34:53.49119882 +0000 UTC m=+0.021088483 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:53 compute-0 podman[14513]: 2025-10-09 09:34:53.594165501 +0000 UTC m=+0.124055175 container init 73daf45d942fc1c6b945ae5ebba9d1aeef02bc1d5ac61bcaeb40d545f3bd6ba4 (image=quay.io/ceph/ceph:v19, name=naughty_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  9 09:34:53 compute-0 podman[14513]: 2025-10-09 09:34:53.598238882 +0000 UTC m=+0.128128545 container start 73daf45d942fc1c6b945ae5ebba9d1aeef02bc1d5ac61bcaeb40d545f3bd6ba4 (image=quay.io/ceph/ceph:v19, name=naughty_antonelli, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:34:53 compute-0 podman[14513]: 2025-10-09 09:34:53.59972332 +0000 UTC m=+0.129612983 container attach 73daf45d942fc1c6b945ae5ebba9d1aeef02bc1d5ac61bcaeb40d545f3bd6ba4 (image=quay.io/ceph/ceph:v19, name=naughty_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:53 compute-0 ceph-mgr[4772]: [devicehealth INFO root] creating mgr pool
Oct  9 09:34:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Oct  9 09:34:53 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct  9 09:34:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct  9 09:34:53 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/854922803' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  9 09:34:53 compute-0 naughty_antonelli[14526]: 
Oct  9 09:34:53 compute-0 naughty_antonelli[14526]: {"fsid":"286f8bf0-da72-5823-9a4e-ac4457d9e609","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":72,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":8,"num_osds":2,"num_up_osds":1,"osd_up_since":1760002493,"num_in_osds":2,"osd_in_since":1760002481,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-10-09T09:33:39:705322+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-09T09:33:39.706205+0000","services":{}},"progress_events":{}}
Oct  9 09:34:53 compute-0 systemd[1]: libpod-73daf45d942fc1c6b945ae5ebba9d1aeef02bc1d5ac61bcaeb40d545f3bd6ba4.scope: Deactivated successfully.
Oct  9 09:34:53 compute-0 podman[14553]: 2025-10-09 09:34:53.960574137 +0000 UTC m=+0.017354412 container died 73daf45d942fc1c6b945ae5ebba9d1aeef02bc1d5ac61bcaeb40d545f3bd6ba4 (image=quay.io/ceph/ceph:v19, name=naughty_antonelli, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  9 09:34:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-641955f23a292a43b7bc8eeb6c1b45a4455889b3f7cf49e334ea3126a8178d35-merged.mount: Deactivated successfully.
Oct  9 09:34:53 compute-0 podman[14553]: 2025-10-09 09:34:53.980762051 +0000 UTC m=+0.037542306 container remove 73daf45d942fc1c6b945ae5ebba9d1aeef02bc1d5ac61bcaeb40d545f3bd6ba4 (image=quay.io/ceph/ceph:v19, name=naughty_antonelli, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:34:53 compute-0 systemd[1]: libpod-conmon-73daf45d942fc1c6b945ae5ebba9d1aeef02bc1d5ac61bcaeb40d545f3bd6ba4.scope: Deactivated successfully.
Oct  9 09:34:54 compute-0 python3[14590]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
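
The _raw_params value above is the exact pool-creation command this Ansible task runs; reflowed with line continuations for readability (same arguments as logged, nothing added):

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool create vms replicated_rule --autoscale-mode on

Note in the mon_command audit entries below that the positional replicated_rule argument is recorded under the erasure_code_profile key; the pool is nonetheless created (see "pool 'vms' created" further down).
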
Oct  9 09:34:54 compute-0 podman[14591]: 2025-10-09 09:34:54.401902292 +0000 UTC m=+0.027405783 container create b9cdabf7509332541b20063ede85f15a7485731d304e8628de24d42f21c88638 (image=quay.io/ceph/ceph:v19, name=thirsty_gould, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:34:54 compute-0 systemd[1]: Started libpod-conmon-b9cdabf7509332541b20063ede85f15a7485731d304e8628de24d42f21c88638.scope.
Oct  9 09:34:54 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:54 compute-0 ceph-mon[4497]: OSD bench result of 25996.309425 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
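
Here mclock rejects its built-in OSD bench result (25996 IOPS falls outside the 50-500 IOPS plausibility window) and keeps the default 315 IOPS capacity. A minimal sketch of the workflow the message recommends, assuming fio is installed and /dev/vdb (a hypothetical path) is the OSD's backing device; randwrite against a raw device destroys data, so this belongs on an empty or scratch disk only:

    # hypothetical fio benchmark of the backing device (destructive!)
    fio --name=osd-iops --filename=/dev/vdb --direct=1 --ioengine=libaio \
        --rw=randwrite --bs=4k --iodepth=16 --runtime=60 --time_based \
        --group_reporting

    # persist the measured figure for osd.1 (use the _ssd variant for SSD-class OSDs)
    ceph config set osd.1 osd_mclock_max_capacity_iops_hdd <measured_iops>
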
Oct  9 09:34:54 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct  9 09:34:54 compute-0 ceph-mgr[4772]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3679111284; not ready for session (expect reconnect)
Oct  9 09:34:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e938382e66ddfaa005ae8d8fd26067208f93197ebf5e69c2d90569e7d15f02b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e938382e66ddfaa005ae8d8fd26067208f93197ebf5e69c2d90569e7d15f02b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 09:34:54 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 09:34:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Oct  9 09:34:54 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  9 09:34:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  9 09:34:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct  9 09:34:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e9 e9: 2 total, 2 up, 2 in
Oct  9 09:34:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e9 crush map has features 3314933000852226048, adjusting msgr requires
Oct  9 09:34:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Oct  9 09:34:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Oct  9 09:34:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Oct  9 09:34:54 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284] boot
Oct  9 09:34:54 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 2 up, 2 in
Oct  9 09:34:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 09:34:54 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 09:34:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Oct  9 09:34:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct  9 09:34:54 compute-0 podman[14591]: 2025-10-09 09:34:54.486857774 +0000 UTC m=+0.112361276 container init b9cdabf7509332541b20063ede85f15a7485731d304e8628de24d42f21c88638 (image=quay.io/ceph/ceph:v19, name=thirsty_gould, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct  9 09:34:54 compute-0 podman[14591]: 2025-10-09 09:34:54.391083123 +0000 UTC m=+0.016586624 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:54 compute-0 podman[14591]: 2025-10-09 09:34:54.492209815 +0000 UTC m=+0.117713296 container start b9cdabf7509332541b20063ede85f15a7485731d304e8628de24d42f21c88638 (image=quay.io/ceph/ceph:v19, name=thirsty_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  9 09:34:54 compute-0 podman[14591]: 2025-10-09 09:34:54.493415429 +0000 UTC m=+0.118918899 container attach b9cdabf7509332541b20063ede85f15a7485731d304e8628de24d42f21c88638 (image=quay.io/ceph/ceph:v19, name=thirsty_gould, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct  9 09:34:54 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v31: 1 pgs: 1 unknown; 0 B data, 122 MiB used, 20 GiB / 20 GiB avail
Oct  9 09:34:54 compute-0 ceph-osd[12528]: osd.1 9 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  9 09:34:54 compute-0 ceph-osd[12528]: osd.1 9 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct  9 09:34:54 compute-0 ceph-osd[12528]: osd.1 9 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  9 09:34:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct  9 09:34:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3807816729' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:34:55 compute-0 ceph-mon[4497]: OSD bench result of 11440.697696 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  9 09:34:55 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct  9 09:34:55 compute-0 ceph-mon[4497]: osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284] boot
Oct  9 09:34:55 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct  9 09:34:55 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3807816729' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:34:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Oct  9 09:34:55 compute-0 ceph-mon[4497]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
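
POOL_APP_NOT_ENABLED fires here because the freshly created vms pool carries no application tag yet; the mgr clears the same condition for .mgr with the "osd pool application enable" command that finishes just below. A sketch of the matching fix, assuming vms will back RBD workloads as is usual for OpenStack deployments:

    ceph osd pool application enable vms rbd

and likewise for the volumes, backups, and images pools created later in this log.
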
Oct  9 09:34:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct  9 09:34:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3807816729' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:34:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e10 e10: 2 total, 2 up, 2 in
Oct  9 09:34:55 compute-0 thirsty_gould[14603]: pool 'vms' created
Oct  9 09:34:55 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 2 up, 2 in
Oct  9 09:34:55 compute-0 systemd[1]: libpod-b9cdabf7509332541b20063ede85f15a7485731d304e8628de24d42f21c88638.scope: Deactivated successfully.
Oct  9 09:34:55 compute-0 podman[14591]: 2025-10-09 09:34:55.497029271 +0000 UTC m=+1.122532752 container died b9cdabf7509332541b20063ede85f15a7485731d304e8628de24d42f21c88638 (image=quay.io/ceph/ceph:v19, name=thirsty_gould, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct  9 09:34:55 compute-0 ceph-mgr[4772]: [devicehealth INFO root] creating main.db for devicehealth
Oct  9 09:34:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e938382e66ddfaa005ae8d8fd26067208f93197ebf5e69c2d90569e7d15f02b-merged.mount: Deactivated successfully.
Oct  9 09:34:55 compute-0 podman[14591]: 2025-10-09 09:34:55.517806756 +0000 UTC m=+1.143310247 container remove b9cdabf7509332541b20063ede85f15a7485731d304e8628de24d42f21c88638 (image=quay.io/ceph/ceph:v19, name=thirsty_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  9 09:34:55 compute-0 ceph-mgr[4772]: [devicehealth INFO root] Check health
Oct  9 09:34:55 compute-0 systemd[1]: libpod-conmon-b9cdabf7509332541b20063ede85f15a7485731d304e8628de24d42f21c88638.scope: Deactivated successfully.
Oct  9 09:34:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct  9 09:34:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct  9 09:34:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  9 09:34:55 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  9 09:34:55 compute-0 python3[14679]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:34:55 compute-0 podman[14680]: 2025-10-09 09:34:55.777748682 +0000 UTC m=+0.026927292 container create f6da3c93a40a295d7dcc281d490d0b630b23738197901d7f2cbea797068dbce4 (image=quay.io/ceph/ceph:v19, name=condescending_ritchie, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  9 09:34:55 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:34:55
Oct  9 09:34:55 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:34:55 compute-0 ceph-mgr[4772]: [balancer INFO root] Some PGs (0.500000) are unknown; try again later
Oct  9 09:34:55 compute-0 systemd[1]: Started libpod-conmon-f6da3c93a40a295d7dcc281d490d0b630b23738197901d7f2cbea797068dbce4.scope.
Oct  9 09:34:55 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:34:55 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct  9 09:34:55 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 1 (current 1)
Oct  9 09:34:55 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Oct  9 09:34:55 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:55 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  9 09:34:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Oct  9 09:34:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c707379b5f95bdd547c83887c9d0362378b1e673bc630e3e696554ab87bc2cb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c707379b5f95bdd547c83887c9d0362378b1e673bc630e3e696554ab87bc2cb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:55 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:34:55 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:34:55 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:34:55 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:34:55 compute-0 podman[14680]: 2025-10-09 09:34:55.83033869 +0000 UTC m=+0.079517320 container init f6da3c93a40a295d7dcc281d490d0b630b23738197901d7f2cbea797068dbce4 (image=quay.io/ceph/ceph:v19, name=condescending_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 09:34:55 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:34:55 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:34:55 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:34:55 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:34:55 compute-0 podman[14680]: 2025-10-09 09:34:55.834860586 +0000 UTC m=+0.084039196 container start f6da3c93a40a295d7dcc281d490d0b630b23738197901d7f2cbea797068dbce4 (image=quay.io/ceph/ceph:v19, name=condescending_ritchie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  9 09:34:55 compute-0 podman[14680]: 2025-10-09 09:34:55.83603528 +0000 UTC m=+0.085213891 container attach f6da3c93a40a295d7dcc281d490d0b630b23738197901d7f2cbea797068dbce4 (image=quay.io/ceph/ceph:v19, name=condescending_ritchie, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 09:34:55 compute-0 podman[14680]: 2025-10-09 09:34:55.767162061 +0000 UTC m=+0.016340691 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:34:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct  9 09:34:56 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1972273422' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:34:56 compute-0 ceph-mon[4497]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 09:34:56 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct  9 09:34:56 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3807816729' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:34:56 compute-0 ceph-mon[4497]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct  9 09:34:56 compute-0 ceph-mon[4497]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct  9 09:34:56 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:34:56 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/1972273422' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:34:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Oct  9 09:34:56 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:34:56 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1972273422' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:34:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Oct  9 09:34:56 compute-0 condescending_ritchie[14692]: pool 'volumes' created
Oct  9 09:34:56 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Oct  9 09:34:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 11 pg[3.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=11) [1] r=0 lpr=11 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:34:56 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev eb7f056d-2600-4463-b9e6-7421f87d039d (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct  9 09:34:56 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev eb7f056d-2600-4463-b9e6-7421f87d039d (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct  9 09:34:56 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event eb7f056d-2600-4463-b9e6-7421f87d039d (PG autoscaler increasing pool 2 PGs from 1 to 32) in 0 seconds
Oct  9 09:34:56 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.lwqgfy(active, since 60s)
Oct  9 09:34:56 compute-0 systemd[1]: libpod-f6da3c93a40a295d7dcc281d490d0b630b23738197901d7f2cbea797068dbce4.scope: Deactivated successfully.
Oct  9 09:34:56 compute-0 podman[14680]: 2025-10-09 09:34:56.506269893 +0000 UTC m=+0.755448503 container died f6da3c93a40a295d7dcc281d490d0b630b23738197901d7f2cbea797068dbce4 (image=quay.io/ceph/ceph:v19, name=condescending_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:34:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c707379b5f95bdd547c83887c9d0362378b1e673bc630e3e696554ab87bc2cb-merged.mount: Deactivated successfully.
Oct  9 09:34:56 compute-0 podman[14680]: 2025-10-09 09:34:56.526012848 +0000 UTC m=+0.775191458 container remove f6da3c93a40a295d7dcc281d490d0b630b23738197901d7f2cbea797068dbce4 (image=quay.io/ceph/ceph:v19, name=condescending_ritchie, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:34:56 compute-0 systemd[1]: libpod-conmon-f6da3c93a40a295d7dcc281d490d0b630b23738197901d7f2cbea797068dbce4.scope: Deactivated successfully.
Oct  9 09:34:56 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v34: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 148 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:34:56 compute-0 python3[14753]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:34:56 compute-0 podman[14754]: 2025-10-09 09:34:56.779911138 +0000 UTC m=+0.026347789 container create 1f3ab1838cd6d58f278328f69c574e0a50af4e9f0cf10dadf5154b7d2b888fa2 (image=quay.io/ceph/ceph:v19, name=nifty_heyrovsky, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  9 09:34:56 compute-0 systemd[1]: Started libpod-conmon-1f3ab1838cd6d58f278328f69c574e0a50af4e9f0cf10dadf5154b7d2b888fa2.scope.
Oct  9 09:34:56 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/183bb4857a8d92f23afd2a55d6b7d8735729999667e7092dbd01729a7be37a6d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/183bb4857a8d92f23afd2a55d6b7d8735729999667e7092dbd01729a7be37a6d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:56 compute-0 podman[14754]: 2025-10-09 09:34:56.8360513 +0000 UTC m=+0.082487951 container init 1f3ab1838cd6d58f278328f69c574e0a50af4e9f0cf10dadf5154b7d2b888fa2 (image=quay.io/ceph/ceph:v19, name=nifty_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:34:56 compute-0 podman[14754]: 2025-10-09 09:34:56.840242671 +0000 UTC m=+0.086679313 container start 1f3ab1838cd6d58f278328f69c574e0a50af4e9f0cf10dadf5154b7d2b888fa2 (image=quay.io/ceph/ceph:v19, name=nifty_heyrovsky, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:34:56 compute-0 podman[14754]: 2025-10-09 09:34:56.841374987 +0000 UTC m=+0.087811628 container attach 1f3ab1838cd6d58f278328f69c574e0a50af4e9f0cf10dadf5154b7d2b888fa2 (image=quay.io/ceph/ceph:v19, name=nifty_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True)
Oct  9 09:34:56 compute-0 podman[14754]: 2025-10-09 09:34:56.769302867 +0000 UTC m=+0.015739518 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:57 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct  9 09:34:57 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4109488378' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:34:57 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:34:57 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/1972273422' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:34:57 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/4109488378' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:34:57 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Oct  9 09:34:57 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4109488378' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:34:57 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Oct  9 09:34:57 compute-0 nifty_heyrovsky[14767]: pool 'backups' created
Oct  9 09:34:57 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Oct  9 09:34:57 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 12 pg[4.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:34:57 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 12 pg[3.0( empty local-lis/les=11/12 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=11) [1] r=0 lpr=11 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:34:57 compute-0 systemd[1]: libpod-1f3ab1838cd6d58f278328f69c574e0a50af4e9f0cf10dadf5154b7d2b888fa2.scope: Deactivated successfully.
Oct  9 09:34:57 compute-0 podman[14754]: 2025-10-09 09:34:57.510305942 +0000 UTC m=+0.756742593 container died 1f3ab1838cd6d58f278328f69c574e0a50af4e9f0cf10dadf5154b7d2b888fa2 (image=quay.io/ceph/ceph:v19, name=nifty_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 09:34:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-183bb4857a8d92f23afd2a55d6b7d8735729999667e7092dbd01729a7be37a6d-merged.mount: Deactivated successfully.
Oct  9 09:34:57 compute-0 podman[14754]: 2025-10-09 09:34:57.532451267 +0000 UTC m=+0.778887908 container remove 1f3ab1838cd6d58f278328f69c574e0a50af4e9f0cf10dadf5154b7d2b888fa2 (image=quay.io/ceph/ceph:v19, name=nifty_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  9 09:34:57 compute-0 systemd[1]: libpod-conmon-1f3ab1838cd6d58f278328f69c574e0a50af4e9f0cf10dadf5154b7d2b888fa2.scope: Deactivated successfully.
Oct  9 09:34:57 compute-0 python3[14829]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:34:57 compute-0 podman[14830]: 2025-10-09 09:34:57.822384089 +0000 UTC m=+0.031467732 container create 6ddda7a966e7ded2fba901c5d5e8cf70be82b31adb45262fd0d912b8399a827e (image=quay.io/ceph/ceph:v19, name=eager_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 09:34:57 compute-0 systemd[1]: Started libpod-conmon-6ddda7a966e7ded2fba901c5d5e8cf70be82b31adb45262fd0d912b8399a827e.scope.
Oct  9 09:34:57 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef34499bb6bc6c6066697528db267b5371e669743240d04914b383171d6b2cda/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef34499bb6bc6c6066697528db267b5371e669743240d04914b383171d6b2cda/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:57 compute-0 podman[14830]: 2025-10-09 09:34:57.863275479 +0000 UTC m=+0.072359133 container init 6ddda7a966e7ded2fba901c5d5e8cf70be82b31adb45262fd0d912b8399a827e (image=quay.io/ceph/ceph:v19, name=eager_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:34:57 compute-0 podman[14830]: 2025-10-09 09:34:57.867822052 +0000 UTC m=+0.076905696 container start 6ddda7a966e7ded2fba901c5d5e8cf70be82b31adb45262fd0d912b8399a827e (image=quay.io/ceph/ceph:v19, name=eager_dhawan, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:34:57 compute-0 podman[14830]: 2025-10-09 09:34:57.868809172 +0000 UTC m=+0.077892817 container attach 6ddda7a966e7ded2fba901c5d5e8cf70be82b31adb45262fd0d912b8399a827e (image=quay.io/ceph/ceph:v19, name=eager_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  9 09:34:57 compute-0 podman[14830]: 2025-10-09 09:34:57.812379395 +0000 UTC m=+0.021463059 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:58 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct  9 09:34:58 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2120229509' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:34:58 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Oct  9 09:34:58 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2120229509' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:34:58 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Oct  9 09:34:58 compute-0 eager_dhawan[14842]: pool 'images' created
Oct  9 09:34:58 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Oct  9 09:34:58 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/4109488378' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:34:58 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/2120229509' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:34:58 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 13 pg[5.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:34:58 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 13 pg[4.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:34:58 compute-0 systemd[1]: libpod-6ddda7a966e7ded2fba901c5d5e8cf70be82b31adb45262fd0d912b8399a827e.scope: Deactivated successfully.
Oct  9 09:34:58 compute-0 conmon[14842]: conmon 6ddda7a966e7ded2fba9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6ddda7a966e7ded2fba901c5d5e8cf70be82b31adb45262fd0d912b8399a827e.scope/container/memory.events
Oct  9 09:34:58 compute-0 podman[14830]: 2025-10-09 09:34:58.515261745 +0000 UTC m=+0.724345400 container died 6ddda7a966e7ded2fba901c5d5e8cf70be82b31adb45262fd0d912b8399a827e (image=quay.io/ceph/ceph:v19, name=eager_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  9 09:34:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef34499bb6bc6c6066697528db267b5371e669743240d04914b383171d6b2cda-merged.mount: Deactivated successfully.
Oct  9 09:34:58 compute-0 podman[14830]: 2025-10-09 09:34:58.536695017 +0000 UTC m=+0.745778661 container remove 6ddda7a966e7ded2fba901c5d5e8cf70be82b31adb45262fd0d912b8399a827e (image=quay.io/ceph/ceph:v19, name=eager_dhawan, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  9 09:34:58 compute-0 systemd[1]: libpod-conmon-6ddda7a966e7ded2fba901c5d5e8cf70be82b31adb45262fd0d912b8399a827e.scope: Deactivated successfully.
Oct  9 09:34:58 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v37: 5 pgs: 1 active+clean, 2 unknown, 2 creating+peering; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:34:58 compute-0 python3[14904]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:34:58 compute-0 podman[14905]: 2025-10-09 09:34:58.789404636 +0000 UTC m=+0.026631024 container create f049f975cb41ff3eee0d4f38da04d180350f8d8d3d86f355faefe63e91ae0a18 (image=quay.io/ceph/ceph:v19, name=thirsty_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  9 09:34:58 compute-0 systemd[1]: Started libpod-conmon-f049f975cb41ff3eee0d4f38da04d180350f8d8d3d86f355faefe63e91ae0a18.scope.
Oct  9 09:34:58 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be00ed050cc58deb8c4d276b6b5932dd53e8852ed2e5c5f393f9ae3c02d32ec1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be00ed050cc58deb8c4d276b6b5932dd53e8852ed2e5c5f393f9ae3c02d32ec1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:58 compute-0 podman[14905]: 2025-10-09 09:34:58.84275639 +0000 UTC m=+0.079982778 container init f049f975cb41ff3eee0d4f38da04d180350f8d8d3d86f355faefe63e91ae0a18 (image=quay.io/ceph/ceph:v19, name=thirsty_zhukovsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  9 09:34:58 compute-0 podman[14905]: 2025-10-09 09:34:58.847164151 +0000 UTC m=+0.084390539 container start f049f975cb41ff3eee0d4f38da04d180350f8d8d3d86f355faefe63e91ae0a18 (image=quay.io/ceph/ceph:v19, name=thirsty_zhukovsky, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  9 09:34:58 compute-0 podman[14905]: 2025-10-09 09:34:58.8492591 +0000 UTC m=+0.086485488 container attach f049f975cb41ff3eee0d4f38da04d180350f8d8d3d86f355faefe63e91ae0a18 (image=quay.io/ceph/ceph:v19, name=thirsty_zhukovsky, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:58 compute-0 podman[14905]: 2025-10-09 09:34:58.778928523 +0000 UTC m=+0.016154931 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:34:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct  9 09:34:59 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1793952825' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:34:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Oct  9 09:34:59 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/2120229509' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:34:59 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/1793952825' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:34:59 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1793952825' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:34:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Oct  9 09:34:59 compute-0 thirsty_zhukovsky[14917]: pool 'cephfs.cephfs.meta' created
Oct  9 09:34:59 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Oct  9 09:34:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 14 pg[6.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:34:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 14 pg[5.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:34:59 compute-0 systemd[1]: libpod-f049f975cb41ff3eee0d4f38da04d180350f8d8d3d86f355faefe63e91ae0a18.scope: Deactivated successfully.
Oct  9 09:34:59 compute-0 conmon[14917]: conmon f049f975cb41ff3eee0d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f049f975cb41ff3eee0d4f38da04d180350f8d8d3d86f355faefe63e91ae0a18.scope/container/memory.events
Oct  9 09:34:59 compute-0 podman[14905]: 2025-10-09 09:34:59.525644391 +0000 UTC m=+0.762870789 container died f049f975cb41ff3eee0d4f38da04d180350f8d8d3d86f355faefe63e91ae0a18 (image=quay.io/ceph/ceph:v19, name=thirsty_zhukovsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-be00ed050cc58deb8c4d276b6b5932dd53e8852ed2e5c5f393f9ae3c02d32ec1-merged.mount: Deactivated successfully.
Oct  9 09:34:59 compute-0 podman[14905]: 2025-10-09 09:34:59.541930017 +0000 UTC m=+0.779156405 container remove f049f975cb41ff3eee0d4f38da04d180350f8d8d3d86f355faefe63e91ae0a18 (image=quay.io/ceph/ceph:v19, name=thirsty_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:59 compute-0 systemd[1]: libpod-conmon-f049f975cb41ff3eee0d4f38da04d180350f8d8d3d86f355faefe63e91ae0a18.scope: Deactivated successfully.
Oct  9 09:34:59 compute-0 python3[14978]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:34:59 compute-0 podman[14979]: 2025-10-09 09:34:59.787845807 +0000 UTC m=+0.027770593 container create 40d28d097e26c22e1c74f655716f8f0ff25b283225f495e83ea2f2cd59424f6d (image=quay.io/ceph/ceph:v19, name=silly_shaw, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  9 09:34:59 compute-0 systemd[1]: Started libpod-conmon-40d28d097e26c22e1c74f655716f8f0ff25b283225f495e83ea2f2cd59424f6d.scope.
Oct  9 09:34:59 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:34:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce8423f5681df5dc49f4b395eb15df71a24a2db3d8688581a8388e1ef054d158/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce8423f5681df5dc49f4b395eb15df71a24a2db3d8688581a8388e1ef054d158/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:59 compute-0 podman[14979]: 2025-10-09 09:34:59.836477991 +0000 UTC m=+0.076402787 container init 40d28d097e26c22e1c74f655716f8f0ff25b283225f495e83ea2f2cd59424f6d (image=quay.io/ceph/ceph:v19, name=silly_shaw, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  9 09:34:59 compute-0 podman[14979]: 2025-10-09 09:34:59.840229474 +0000 UTC m=+0.080154250 container start 40d28d097e26c22e1c74f655716f8f0ff25b283225f495e83ea2f2cd59424f6d (image=quay.io/ceph/ceph:v19, name=silly_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct  9 09:34:59 compute-0 podman[14979]: 2025-10-09 09:34:59.842707727 +0000 UTC m=+0.082632523 container attach 40d28d097e26c22e1c74f655716f8f0ff25b283225f495e83ea2f2cd59424f6d (image=quay.io/ceph/ceph:v19, name=silly_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  9 09:34:59 compute-0 podman[14979]: 2025-10-09 09:34:59.775631507 +0000 UTC m=+0.015556302 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:00 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct  9 09:35:00 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/395083493' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:35:00 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Oct  9 09:35:00 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/1793952825' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:35:00 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/395083493' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:35:00 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/395083493' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:35:00 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Oct  9 09:35:00 compute-0 silly_shaw[14992]: pool 'cephfs.cephfs.data' created
Oct  9 09:35:00 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Oct  9 09:35:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 15 pg[6.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:00 compute-0 systemd[1]: libpod-40d28d097e26c22e1c74f655716f8f0ff25b283225f495e83ea2f2cd59424f6d.scope: Deactivated successfully.
Oct  9 09:35:00 compute-0 podman[14979]: 2025-10-09 09:35:00.530561991 +0000 UTC m=+0.770486777 container died 40d28d097e26c22e1c74f655716f8f0ff25b283225f495e83ea2f2cd59424f6d (image=quay.io/ceph/ceph:v19, name=silly_shaw, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  9 09:35:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce8423f5681df5dc49f4b395eb15df71a24a2db3d8688581a8388e1ef054d158-merged.mount: Deactivated successfully.
Oct  9 09:35:00 compute-0 podman[14979]: 2025-10-09 09:35:00.549863073 +0000 UTC m=+0.789787850 container remove 40d28d097e26c22e1c74f655716f8f0ff25b283225f495e83ea2f2cd59424f6d (image=quay.io/ceph/ceph:v19, name=silly_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  9 09:35:00 compute-0 systemd[1]: libpod-conmon-40d28d097e26c22e1c74f655716f8f0ff25b283225f495e83ea2f2cd59424f6d.scope: Deactivated successfully.
Oct  9 09:35:00 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v40: 7 pgs: 3 active+clean, 3 unknown, 1 creating+peering; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:35:00 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Oct  9 09:35:00 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:35:00 compute-0 python3[15055]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:00 compute-0 ceph-mgr[4772]: [progress INFO root] Writing back 3 completed events
Oct  9 09:35:00 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 09:35:00 compute-0 podman[15056]: 2025-10-09 09:35:00.829714989 +0000 UTC m=+0.026956137 container create 8568dde5ac9b3258d57c4014e98db683d2567ee50d691dab9743a8f489cc637c (image=quay.io/ceph/ceph:v19, name=sweet_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  9 09:35:00 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:00 compute-0 systemd[1]: Started libpod-conmon-8568dde5ac9b3258d57c4014e98db683d2567ee50d691dab9743a8f489cc637c.scope.
Oct  9 09:35:00 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c240a3313f92ab79a7d5af9dcf4645f16cc6052519cef8e6fac6f5c6853a017d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c240a3313f92ab79a7d5af9dcf4645f16cc6052519cef8e6fac6f5c6853a017d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:00 compute-0 podman[15056]: 2025-10-09 09:35:00.88254569 +0000 UTC m=+0.079786858 container init 8568dde5ac9b3258d57c4014e98db683d2567ee50d691dab9743a8f489cc637c (image=quay.io/ceph/ceph:v19, name=sweet_lewin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:00 compute-0 podman[15056]: 2025-10-09 09:35:00.886569737 +0000 UTC m=+0.083810885 container start 8568dde5ac9b3258d57c4014e98db683d2567ee50d691dab9743a8f489cc637c (image=quay.io/ceph/ceph:v19, name=sweet_lewin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  9 09:35:00 compute-0 podman[15056]: 2025-10-09 09:35:00.888781327 +0000 UTC m=+0.086022495 container attach 8568dde5ac9b3258d57c4014e98db683d2567ee50d691dab9743a8f489cc637c (image=quay.io/ceph/ceph:v19, name=sweet_lewin, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  9 09:35:00 compute-0 podman[15056]: 2025-10-09 09:35:00.818278275 +0000 UTC m=+0.015519443 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:01 compute-0 ceph-mon[4497]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 09:35:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Oct  9 09:35:01 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2631429048' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct  9 09:35:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Oct  9 09:35:01 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:35:01 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2631429048' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct  9 09:35:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Oct  9 09:35:01 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Oct  9 09:35:01 compute-0 sweet_lewin[15068]: enabled application 'rbd' on pool 'vms'
Oct  9 09:35:01 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/395083493' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:35:01 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:35:01 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:01 compute-0 ceph-mon[4497]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 09:35:01 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/2631429048' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct  9 09:35:01 compute-0 systemd[1]: libpod-8568dde5ac9b3258d57c4014e98db683d2567ee50d691dab9743a8f489cc637c.scope: Deactivated successfully.
Oct  9 09:35:01 compute-0 podman[15056]: 2025-10-09 09:35:01.534065967 +0000 UTC m=+0.731307115 container died 8568dde5ac9b3258d57c4014e98db683d2567ee50d691dab9743a8f489cc637c (image=quay.io/ceph/ceph:v19, name=sweet_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Oct  9 09:35:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-c240a3313f92ab79a7d5af9dcf4645f16cc6052519cef8e6fac6f5c6853a017d-merged.mount: Deactivated successfully.
Oct  9 09:35:01 compute-0 podman[15056]: 2025-10-09 09:35:01.551899482 +0000 UTC m=+0.749140630 container remove 8568dde5ac9b3258d57c4014e98db683d2567ee50d691dab9743a8f489cc637c (image=quay.io/ceph/ceph:v19, name=sweet_lewin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Oct  9 09:35:01 compute-0 systemd[1]: libpod-conmon-8568dde5ac9b3258d57c4014e98db683d2567ee50d691dab9743a8f489cc637c.scope: Deactivated successfully.
Oct  9 09:35:01 compute-0 python3[15128]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:01 compute-0 podman[15129]: 2025-10-09 09:35:01.798840304 +0000 UTC m=+0.026684574 container create c680fc6e76d2a61ae70a7cf8f890400d9d520b2a7694711739ef0a46c7e07872 (image=quay.io/ceph/ceph:v19, name=gallant_cerf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  9 09:35:01 compute-0 systemd[1]: Started libpod-conmon-c680fc6e76d2a61ae70a7cf8f890400d9d520b2a7694711739ef0a46c7e07872.scope.
Oct  9 09:35:01 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/282cfbd10c349f85399cf943ad4044e55d419b4f915c28b89eacaf333893fe77/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/282cfbd10c349f85399cf943ad4044e55d419b4f915c28b89eacaf333893fe77/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:01 compute-0 podman[15129]: 2025-10-09 09:35:01.846052431 +0000 UTC m=+0.073896701 container init c680fc6e76d2a61ae70a7cf8f890400d9d520b2a7694711739ef0a46c7e07872 (image=quay.io/ceph/ceph:v19, name=gallant_cerf, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:01 compute-0 podman[15129]: 2025-10-09 09:35:01.849768078 +0000 UTC m=+0.077612337 container start c680fc6e76d2a61ae70a7cf8f890400d9d520b2a7694711739ef0a46c7e07872 (image=quay.io/ceph/ceph:v19, name=gallant_cerf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  9 09:35:01 compute-0 podman[15129]: 2025-10-09 09:35:01.850837093 +0000 UTC m=+0.078681363 container attach c680fc6e76d2a61ae70a7cf8f890400d9d520b2a7694711739ef0a46c7e07872 (image=quay.io/ceph/ceph:v19, name=gallant_cerf, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Oct  9 09:35:01 compute-0 podman[15129]: 2025-10-09 09:35:01.78685187 +0000 UTC m=+0.014696150 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:02 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Oct  9 09:35:02 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/992561200' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct  9 09:35:02 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Oct  9 09:35:02 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:35:02 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/2631429048' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct  9 09:35:02 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/992561200' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct  9 09:35:02 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/992561200' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct  9 09:35:02 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Oct  9 09:35:02 compute-0 gallant_cerf[15142]: enabled application 'rbd' on pool 'volumes'
Oct  9 09:35:02 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Oct  9 09:35:02 compute-0 systemd[1]: libpod-c680fc6e76d2a61ae70a7cf8f890400d9d520b2a7694711739ef0a46c7e07872.scope: Deactivated successfully.
Oct  9 09:35:02 compute-0 podman[15129]: 2025-10-09 09:35:02.542274103 +0000 UTC m=+0.770118383 container died c680fc6e76d2a61ae70a7cf8f890400d9d520b2a7694711739ef0a46c7e07872 (image=quay.io/ceph/ceph:v19, name=gallant_cerf, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  9 09:35:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-282cfbd10c349f85399cf943ad4044e55d419b4f915c28b89eacaf333893fe77-merged.mount: Deactivated successfully.
Oct  9 09:35:02 compute-0 podman[15129]: 2025-10-09 09:35:02.561583561 +0000 UTC m=+0.789427831 container remove c680fc6e76d2a61ae70a7cf8f890400d9d520b2a7694711739ef0a46c7e07872 (image=quay.io/ceph/ceph:v19, name=gallant_cerf, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:02 compute-0 systemd[1]: libpod-conmon-c680fc6e76d2a61ae70a7cf8f890400d9d520b2a7694711739ef0a46c7e07872.scope: Deactivated successfully.
Oct  9 09:35:02 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v43: 38 pgs: 3 active+clean, 34 unknown, 1 creating+peering; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:35:02 compute-0 python3[15202]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:02 compute-0 podman[15203]: 2025-10-09 09:35:02.813831508 +0000 UTC m=+0.028110082 container create 36112eb73d92fadb4d850a7a62b2f93a1ea72c1ac722e6202f6a2000c2bf6437 (image=quay.io/ceph/ceph:v19, name=quirky_mahavira, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  9 09:35:02 compute-0 systemd[1]: Started libpod-conmon-36112eb73d92fadb4d850a7a62b2f93a1ea72c1ac722e6202f6a2000c2bf6437.scope.
Oct  9 09:35:02 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f428024667017ad6f111025fad7318eb99d739b245a5ea65024ca6670ed3a4d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f428024667017ad6f111025fad7318eb99d739b245a5ea65024ca6670ed3a4d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:02 compute-0 podman[15203]: 2025-10-09 09:35:02.86698064 +0000 UTC m=+0.081259223 container init 36112eb73d92fadb4d850a7a62b2f93a1ea72c1ac722e6202f6a2000c2bf6437 (image=quay.io/ceph/ceph:v19, name=quirky_mahavira, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct  9 09:35:02 compute-0 podman[15203]: 2025-10-09 09:35:02.871203042 +0000 UTC m=+0.085481615 container start 36112eb73d92fadb4d850a7a62b2f93a1ea72c1ac722e6202f6a2000c2bf6437 (image=quay.io/ceph/ceph:v19, name=quirky_mahavira, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:35:02 compute-0 podman[15203]: 2025-10-09 09:35:02.872340667 +0000 UTC m=+0.086619240 container attach 36112eb73d92fadb4d850a7a62b2f93a1ea72c1ac722e6202f6a2000c2bf6437 (image=quay.io/ceph/ceph:v19, name=quirky_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:02 compute-0 podman[15203]: 2025-10-09 09:35:02.80266789 +0000 UTC m=+0.016946483 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:03 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Oct  9 09:35:03 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1830712947' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct  9 09:35:03 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Oct  9 09:35:03 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/992561200' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct  9 09:35:03 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/1830712947' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct  9 09:35:03 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1830712947' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct  9 09:35:03 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Oct  9 09:35:03 compute-0 quirky_mahavira[15215]: enabled application 'rbd' on pool 'backups'
Oct  9 09:35:03 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Oct  9 09:35:03 compute-0 systemd[1]: libpod-36112eb73d92fadb4d850a7a62b2f93a1ea72c1ac722e6202f6a2000c2bf6437.scope: Deactivated successfully.
Oct  9 09:35:03 compute-0 podman[15240]: 2025-10-09 09:35:03.57557476 +0000 UTC m=+0.017092459 container died 36112eb73d92fadb4d850a7a62b2f93a1ea72c1ac722e6202f6a2000c2bf6437 (image=quay.io/ceph/ceph:v19, name=quirky_mahavira, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:35:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f428024667017ad6f111025fad7318eb99d739b245a5ea65024ca6670ed3a4d-merged.mount: Deactivated successfully.
Oct  9 09:35:03 compute-0 podman[15240]: 2025-10-09 09:35:03.59083331 +0000 UTC m=+0.032350989 container remove 36112eb73d92fadb4d850a7a62b2f93a1ea72c1ac722e6202f6a2000c2bf6437 (image=quay.io/ceph/ceph:v19, name=quirky_mahavira, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct  9 09:35:03 compute-0 systemd[1]: libpod-conmon-36112eb73d92fadb4d850a7a62b2f93a1ea72c1ac722e6202f6a2000c2bf6437.scope: Deactivated successfully.
Oct  9 09:35:03 compute-0 python3[15276]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:03 compute-0 podman[15277]: 2025-10-09 09:35:03.850024198 +0000 UTC m=+0.035045658 container create b863bf5013a602187cba8b28a5b020e166acdbfed252d2d6263ed82243550e60 (image=quay.io/ceph/ceph:v19, name=friendly_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  9 09:35:03 compute-0 systemd[1]: Started libpod-conmon-b863bf5013a602187cba8b28a5b020e166acdbfed252d2d6263ed82243550e60.scope.
Oct  9 09:35:03 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4204b0d6c6cd08c238d94d2e5686bcca0acd777b31af5143e3c5ab178de84211/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4204b0d6c6cd08c238d94d2e5686bcca0acd777b31af5143e3c5ab178de84211/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:03 compute-0 podman[15277]: 2025-10-09 09:35:03.907154607 +0000 UTC m=+0.092176077 container init b863bf5013a602187cba8b28a5b020e166acdbfed252d2d6263ed82243550e60 (image=quay.io/ceph/ceph:v19, name=friendly_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:03 compute-0 podman[15277]: 2025-10-09 09:35:03.911325511 +0000 UTC m=+0.096346972 container start b863bf5013a602187cba8b28a5b020e166acdbfed252d2d6263ed82243550e60 (image=quay.io/ceph/ceph:v19, name=friendly_kowalevski, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:03 compute-0 podman[15277]: 2025-10-09 09:35:03.831359907 +0000 UTC m=+0.016381387 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:03 compute-0 podman[15277]: 2025-10-09 09:35:03.932828915 +0000 UTC m=+0.117850375 container attach b863bf5013a602187cba8b28a5b020e166acdbfed252d2d6263ed82243550e60 (image=quay.io/ceph/ceph:v19, name=friendly_kowalevski, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 09:35:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Oct  9 09:35:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3454543203' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct  9 09:35:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Oct  9 09:35:04 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/1830712947' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct  9 09:35:04 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3454543203' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct  9 09:35:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3454543203' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct  9 09:35:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Oct  9 09:35:04 compute-0 friendly_kowalevski[15289]: enabled application 'rbd' on pool 'images'
Oct  9 09:35:04 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Oct  9 09:35:04 compute-0 systemd[1]: libpod-b863bf5013a602187cba8b28a5b020e166acdbfed252d2d6263ed82243550e60.scope: Deactivated successfully.
Oct  9 09:35:04 compute-0 podman[15277]: 2025-10-09 09:35:04.557593916 +0000 UTC m=+0.742615376 container died b863bf5013a602187cba8b28a5b020e166acdbfed252d2d6263ed82243550e60 (image=quay.io/ceph/ceph:v19, name=friendly_kowalevski, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  9 09:35:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-4204b0d6c6cd08c238d94d2e5686bcca0acd777b31af5143e3c5ab178de84211-merged.mount: Deactivated successfully.
Oct  9 09:35:04 compute-0 podman[15277]: 2025-10-09 09:35:04.575760078 +0000 UTC m=+0.760781537 container remove b863bf5013a602187cba8b28a5b020e166acdbfed252d2d6263ed82243550e60 (image=quay.io/ceph/ceph:v19, name=friendly_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v46: 38 pgs: 6 active+clean, 32 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:35:04 compute-0 systemd[1]: libpod-conmon-b863bf5013a602187cba8b28a5b020e166acdbfed252d2d6263ed82243550e60.scope: Deactivated successfully.
Oct  9 09:35:04 compute-0 python3[15350]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:04 compute-0 podman[15351]: 2025-10-09 09:35:04.829751855 +0000 UTC m=+0.027472881 container create bd646ec3887686b690fd64f491e02fcbf6054ddb3e0c3fe21c4361c3d1266a08 (image=quay.io/ceph/ceph:v19, name=naughty_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:35:04 compute-0 systemd[1]: Started libpod-conmon-bd646ec3887686b690fd64f491e02fcbf6054ddb3e0c3fe21c4361c3d1266a08.scope.
Oct  9 09:35:04 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66b7b9a17afb6586884b300d62b00947326e79db7c51987a4b2d28db5dcd8510/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66b7b9a17afb6586884b300d62b00947326e79db7c51987a4b2d28db5dcd8510/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:04 compute-0 podman[15351]: 2025-10-09 09:35:04.881086354 +0000 UTC m=+0.078807402 container init bd646ec3887686b690fd64f491e02fcbf6054ddb3e0c3fe21c4361c3d1266a08 (image=quay.io/ceph/ceph:v19, name=naughty_villani, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1)
Oct  9 09:35:04 compute-0 podman[15351]: 2025-10-09 09:35:04.885732725 +0000 UTC m=+0.083453752 container start bd646ec3887686b690fd64f491e02fcbf6054ddb3e0c3fe21c4361c3d1266a08 (image=quay.io/ceph/ceph:v19, name=naughty_villani, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:04 compute-0 podman[15351]: 2025-10-09 09:35:04.887583064 +0000 UTC m=+0.085304091 container attach bd646ec3887686b690fd64f491e02fcbf6054ddb3e0c3fe21c4361c3d1266a08 (image=quay.io/ceph/ceph:v19, name=naughty_villani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  9 09:35:04 compute-0 podman[15351]: 2025-10-09 09:35:04.817926518 +0000 UTC m=+0.015647565 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:05 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Oct  9 09:35:05 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/602017510' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct  9 09:35:05 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Oct  9 09:35:05 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3454543203' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct  9 09:35:05 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/602017510' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct  9 09:35:05 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/602017510' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct  9 09:35:05 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Oct  9 09:35:05 compute-0 naughty_villani[15362]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Oct  9 09:35:05 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Oct  9 09:35:05 compute-0 systemd[1]: libpod-bd646ec3887686b690fd64f491e02fcbf6054ddb3e0c3fe21c4361c3d1266a08.scope: Deactivated successfully.
Oct  9 09:35:05 compute-0 podman[15351]: 2025-10-09 09:35:05.561328456 +0000 UTC m=+0.759049504 container died bd646ec3887686b690fd64f491e02fcbf6054ddb3e0c3fe21c4361c3d1266a08 (image=quay.io/ceph/ceph:v19, name=naughty_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-66b7b9a17afb6586884b300d62b00947326e79db7c51987a4b2d28db5dcd8510-merged.mount: Deactivated successfully.
Oct  9 09:35:05 compute-0 podman[15351]: 2025-10-09 09:35:05.577714702 +0000 UTC m=+0.775435729 container remove bd646ec3887686b690fd64f491e02fcbf6054ddb3e0c3fe21c4361c3d1266a08 (image=quay.io/ceph/ceph:v19, name=naughty_villani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:35:05 compute-0 systemd[1]: libpod-conmon-bd646ec3887686b690fd64f491e02fcbf6054ddb3e0c3fe21c4361c3d1266a08.scope: Deactivated successfully.
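The task above is the deployment playbook's recurring pattern for one-shot Ceph admin calls: a disposable "podman run --rm" container with host networking, /etc/ceph bind-mounted, and ceph as the entrypoint, so nothing Ceph-specific has to be installed on the host. Distilled to a minimal shell sketch (image, fsid and paths copied from the Invoked line above; the container name, here naughty_villani, is auto-generated by podman, and the assimilate_ceph.conf mount is unused by this particular command):

    # one-shot ceph CLI call in a throwaway container (sketch of the pattern logged above)
    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool application enable cephfs.cephfs.meta cephfs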
Oct  9 09:35:05 compute-0 python3[15423]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:05 compute-0 podman[15424]: 2025-10-09 09:35:05.820406893 +0000 UTC m=+0.026710553 container create c18a46ec618c9fe2451099c29161cbacd0659b7c001186920651c9a2816c6036 (image=quay.io/ceph/ceph:v19, name=wonderful_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  9 09:35:05 compute-0 systemd[1]: Started libpod-conmon-c18a46ec618c9fe2451099c29161cbacd0659b7c001186920651c9a2816c6036.scope.
Oct  9 09:35:05 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15aa94badeef65042f5ab408363ea890afd6a14b33fff3a45a64bcb62e70dee0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15aa94badeef65042f5ab408363ea890afd6a14b33fff3a45a64bcb62e70dee0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:05 compute-0 podman[15424]: 2025-10-09 09:35:05.867508432 +0000 UTC m=+0.073812113 container init c18a46ec618c9fe2451099c29161cbacd0659b7c001186920651c9a2816c6036 (image=quay.io/ceph/ceph:v19, name=wonderful_leakey, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  9 09:35:05 compute-0 podman[15424]: 2025-10-09 09:35:05.87282734 +0000 UTC m=+0.079131001 container start c18a46ec618c9fe2451099c29161cbacd0659b7c001186920651c9a2816c6036 (image=quay.io/ceph/ceph:v19, name=wonderful_leakey, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:05 compute-0 podman[15424]: 2025-10-09 09:35:05.873827016 +0000 UTC m=+0.080130676 container attach c18a46ec618c9fe2451099c29161cbacd0659b7c001186920651c9a2816c6036 (image=quay.io/ceph/ceph:v19, name=wonderful_leakey, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  9 09:35:05 compute-0 podman[15424]: 2025-10-09 09:35:05.809122407 +0000 UTC m=+0.015426067 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:06 compute-0 ceph-mon[4497]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 09:35:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Oct  9 09:35:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2594759833' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct  9 09:35:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Oct  9 09:35:06 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/602017510' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct  9 09:35:06 compute-0 ceph-mon[4497]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 09:35:06 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/2594759833' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct  9 09:35:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2594759833' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct  9 09:35:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Oct  9 09:35:06 compute-0 wonderful_leakey[15436]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Oct  9 09:35:06 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Oct  9 09:35:06 compute-0 systemd[1]: libpod-c18a46ec618c9fe2451099c29161cbacd0659b7c001186920651c9a2816c6036.scope: Deactivated successfully.
Oct  9 09:35:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v49: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:35:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  9 09:35:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:35:06 compute-0 podman[15461]: 2025-10-09 09:35:06.595783671 +0000 UTC m=+0.016737359 container died c18a46ec618c9fe2451099c29161cbacd0659b7c001186920651c9a2816c6036 (image=quay.io/ceph/ceph:v19, name=wonderful_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-15aa94badeef65042f5ab408363ea890afd6a14b33fff3a45a64bcb62e70dee0-merged.mount: Deactivated successfully.
Oct  9 09:35:06 compute-0 podman[15461]: 2025-10-09 09:35:06.613563383 +0000 UTC m=+0.034517073 container remove c18a46ec618c9fe2451099c29161cbacd0659b7c001186920651c9a2816c6036 (image=quay.io/ceph/ceph:v19, name=wonderful_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  9 09:35:06 compute-0 systemd[1]: libpod-conmon-c18a46ec618c9fe2451099c29161cbacd0659b7c001186920651c9a2816c6036.scope: Deactivated successfully.
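With osdmap epochs e20 and e21 committed above, both CephFS pools now carry the cephfs application tag, and the POOL_APP_NOT_ENABLED health warning clears a moment later (09:35:07). If checking by hand, the tags are visible with the standard CLI, run through the same podman wrapper or from any host with the admin keyring:

    # confirm application tags on the pools touched above
    ceph osd pool ls detail
    ceph osd pool application get cephfs.cephfs.data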
Oct  9 09:35:07 compute-0 python3[15547]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 09:35:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Oct  9 09:35:07 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Oct  9 09:35:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:35:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Oct  9 09:35:07 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Oct  9 09:35:07 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/2594759833' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct  9 09:35:07 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:35:07 compute-0 python3[15618]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760002507.1573262-34198-60864102505215/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:35:07 compute-0 python3[15720]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 09:35:08 compute-0 python3[15795]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760002507.7900662-34212-272536103538091/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=13907672e6dfb128d104af1ed4a990fe5df6f7c1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:35:08 compute-0 python3[15845]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:08 compute-0 podman[15846]: 2025-10-09 09:35:08.546926979 +0000 UTC m=+0.027573459 container create 4edcae30ead54a0518ade9b792cd1914314d769414a0b9e6c5f2b43a2238cdbb (image=quay.io/ceph/ceph:v19, name=modest_herschel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:08 compute-0 ceph-mon[4497]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Oct  9 09:35:08 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
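The interleaved osd pool set pgp_num_actual=32 traffic from mgr.compute-0.lwqgfy is not part of the Ansible run: the mgr walks a pool's actual placement count up toward its pg_num target in steps, and each step is a mon_command that lands in the audit log like any client command. One way to watch that convergence (a sketch; pool name vms taken from the log):

    # mgr-driven PG scaling: target vs. actual placement counts
    ceph osd pool autoscale-status
    ceph osd pool get vms pgp_num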
Oct  9 09:35:08 compute-0 systemd[1]: Started libpod-conmon-4edcae30ead54a0518ade9b792cd1914314d769414a0b9e6c5f2b43a2238cdbb.scope.
Oct  9 09:35:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v51: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:35:08 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee6da6cb176b0c28a19caa2906d5dd953d8d78240db1aac6869a6ba9761bdaa0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee6da6cb176b0c28a19caa2906d5dd953d8d78240db1aac6869a6ba9761bdaa0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee6da6cb176b0c28a19caa2906d5dd953d8d78240db1aac6869a6ba9761bdaa0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 22 pg[2.1f( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 22 pg[2.1b( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 22 pg[2.1e( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 22 pg[2.a( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 22 pg[2.9( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 22 pg[2.6( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 22 pg[2.4( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 22 pg[2.c( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 22 pg[2.d( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 22 pg[2.e( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 22 pg[2.10( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 22 pg[2.13( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 22 pg[2.15( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 22 pg[2.19( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 22 pg[2.1( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:08 compute-0 podman[15846]: 2025-10-09 09:35:08.602023353 +0000 UTC m=+0.082669823 container init 4edcae30ead54a0518ade9b792cd1914314d769414a0b9e6c5f2b43a2238cdbb (image=quay.io/ceph/ceph:v19, name=modest_herschel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:35:08 compute-0 podman[15846]: 2025-10-09 09:35:08.610826301 +0000 UTC m=+0.091472771 container start 4edcae30ead54a0518ade9b792cd1914314d769414a0b9e6c5f2b43a2238cdbb (image=quay.io/ceph/ceph:v19, name=modest_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct  9 09:35:08 compute-0 podman[15846]: 2025-10-09 09:35:08.6127562 +0000 UTC m=+0.093402670 container attach 4edcae30ead54a0518ade9b792cd1914314d769414a0b9e6c5f2b43a2238cdbb (image=quay.io/ceph/ceph:v19, name=modest_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:08 compute-0 podman[15846]: 2025-10-09 09:35:08.535880071 +0000 UTC m=+0.016526562 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Oct  9 09:35:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3549201441' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  9 09:35:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3549201441' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  9 09:35:08 compute-0 modest_herschel[15858]: 
Oct  9 09:35:08 compute-0 modest_herschel[15858]: [global]
Oct  9 09:35:08 compute-0 modest_herschel[15858]: 	fsid = 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct  9 09:35:08 compute-0 modest_herschel[15858]: 	mon_host = 192.168.122.100
Oct  9 09:35:08 compute-0 systemd[1]: libpod-4edcae30ead54a0518ade9b792cd1914314d769414a0b9e6c5f2b43a2238cdbb.scope: Deactivated successfully.
Oct  9 09:35:08 compute-0 podman[15846]: 2025-10-09 09:35:08.884711385 +0000 UTC m=+0.365357855 container died 4edcae30ead54a0518ade9b792cd1914314d769414a0b9e6c5f2b43a2238cdbb (image=quay.io/ceph/ceph:v19, name=modest_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  9 09:35:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee6da6cb176b0c28a19caa2906d5dd953d8d78240db1aac6869a6ba9761bdaa0-merged.mount: Deactivated successfully.
Oct  9 09:35:08 compute-0 podman[15846]: 2025-10-09 09:35:08.901951732 +0000 UTC m=+0.382598202 container remove 4edcae30ead54a0518ade9b792cd1914314d769414a0b9e6c5f2b43a2238cdbb (image=quay.io/ceph/ceph:v19, name=modest_herschel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:08 compute-0 systemd[1]: libpod-conmon-4edcae30ead54a0518ade9b792cd1914314d769414a0b9e6c5f2b43a2238cdbb.scope: Deactivated successfully.
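What just ran: ceph config assimilate-conf reads an INI-style ceph.conf, stores every option it can into the cluster's central configuration database on the mons, and prints a minimal residue of options it cannot centralize. That residue is exactly what the container echoed at 09:35:08 above: fsid and mon_host stay in the local file, since a client needs both before it can reach the mons at all. The round trip, as a sketch:

    # push local conf options into the mon config db; stdout is the non-assimilable residue
    ceph config assimilate-conf -i /home/assimilate_ceph.conf
    # verify the assimilated options now live centrally
    ceph config dump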
Oct  9 09:35:09 compute-0 python3[15918]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:09 compute-0 podman[15919]: 2025-10-09 09:35:09.163111684 +0000 UTC m=+0.025842758 container create c2dd0844c4f38186e8cefc8683c7420fb66d82876c3f7823bed3622d0aa18f3d (image=quay.io/ceph/ceph:v19, name=busy_wilson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 09:35:09 compute-0 systemd[1]: Started libpod-conmon-c2dd0844c4f38186e8cefc8683c7420fb66d82876c3f7823bed3622d0aa18f3d.scope.
Oct  9 09:35:09 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c198b7e90ae40296e9f665de611adeae68d84b596adb796d0162596580d7d7f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c198b7e90ae40296e9f665de611adeae68d84b596adb796d0162596580d7d7f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c198b7e90ae40296e9f665de611adeae68d84b596adb796d0162596580d7d7f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:09 compute-0 podman[15919]: 2025-10-09 09:35:09.239016751 +0000 UTC m=+0.101747815 container init c2dd0844c4f38186e8cefc8683c7420fb66d82876c3f7823bed3622d0aa18f3d (image=quay.io/ceph/ceph:v19, name=busy_wilson, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  9 09:35:09 compute-0 podman[15919]: 2025-10-09 09:35:09.242951941 +0000 UTC m=+0.105683015 container start c2dd0844c4f38186e8cefc8683c7420fb66d82876c3f7823bed3622d0aa18f3d (image=quay.io/ceph/ceph:v19, name=busy_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:09 compute-0 podman[15919]: 2025-10-09 09:35:09.244108581 +0000 UTC m=+0.106839706 container attach c2dd0844c4f38186e8cefc8683c7420fb66d82876c3f7823bed3622d0aa18f3d (image=quay.io/ceph/ceph:v19, name=busy_wilson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  9 09:35:09 compute-0 podman[15919]: 2025-10-09 09:35:09.152813988 +0000 UTC m=+0.015545072 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Oct  9 09:35:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Oct  9 09:35:09 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Oct  9 09:35:09 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3549201441' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  9 09:35:09 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3549201441' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  9 09:35:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 23 pg[2.1e( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 23 pg[2.d( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 23 pg[2.c( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 23 pg[2.a( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 23 pg[2.1b( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 23 pg[2.4( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 23 pg[2.6( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 23 pg[2.1( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 23 pg[2.10( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 23 pg[2.1f( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 23 pg[2.e( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 23 pg[2.13( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 23 pg[2.15( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 23 pg[2.19( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 23 pg[2.9( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22) [1] r=0 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:09 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 2.d deep-scrub starts
Oct  9 09:35:09 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 2.d deep-scrub ok
Oct  9 09:35:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Oct  9 09:35:09 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3070980083' entity='client.admin' 
Oct  9 09:35:09 compute-0 busy_wilson[15931]: set ssl_option
Oct  9 09:35:09 compute-0 systemd[1]: libpod-c2dd0844c4f38186e8cefc8683c7420fb66d82876c3f7823bed3622d0aa18f3d.scope: Deactivated successfully.
Oct  9 09:35:09 compute-0 podman[15956]: 2025-10-09 09:35:09.632235216 +0000 UTC m=+0.017288037 container died c2dd0844c4f38186e8cefc8683c7420fb66d82876c3f7823bed3622d0aa18f3d (image=quay.io/ceph/ceph:v19, name=busy_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  9 09:35:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c198b7e90ae40296e9f665de611adeae68d84b596adb796d0162596580d7d7f-merged.mount: Deactivated successfully.
Oct  9 09:35:09 compute-0 podman[15956]: 2025-10-09 09:35:09.649068034 +0000 UTC m=+0.034120855 container remove c2dd0844c4f38186e8cefc8683c7420fb66d82876c3f7823bed3622d0aa18f3d (image=quay.io/ceph/ceph:v19, name=busy_wilson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:09 compute-0 systemd[1]: libpod-conmon-c2dd0844c4f38186e8cefc8683c7420fb66d82876c3f7823bed3622d0aa18f3d.scope: Deactivated successfully.
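Note that the 09:35:09 audit line for this step ends at entity='client.admin' with no cmd= payload, and the handle_command line shows only the key (ssl_option), not the value: config-key values can carry secrets, so the mon appears to keep them out of its logs; the value here (no_sslv2:sslv3:no_tlsv1:no_tlsv1_1) is visible only because Ansible logged the raw podman invocation. Reading it back:

    # the 'set ssl_option' acknowledgement above confirms the write; fetch the stored value
    ceph config-key get ssl_option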
Oct  9 09:35:09 compute-0 python3[15993]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:09 compute-0 podman[15994]: 2025-10-09 09:35:09.91038951 +0000 UTC m=+0.026522589 container create 4327a49150d063ddaa2f2afabe4d61585f617c16400bb8c1a80bb0f49242f5d8 (image=quay.io/ceph/ceph:v19, name=loving_rhodes, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Oct  9 09:35:09 compute-0 systemd[1]: Started libpod-conmon-4327a49150d063ddaa2f2afabe4d61585f617c16400bb8c1a80bb0f49242f5d8.scope.
Oct  9 09:35:09 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d0af5d67a8a59b97811605839b46a7b66730459f4d1410844ede57de8ac0ec9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d0af5d67a8a59b97811605839b46a7b66730459f4d1410844ede57de8ac0ec9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d0af5d67a8a59b97811605839b46a7b66730459f4d1410844ede57de8ac0ec9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:09 compute-0 podman[15994]: 2025-10-09 09:35:09.967573329 +0000 UTC m=+0.083706418 container init 4327a49150d063ddaa2f2afabe4d61585f617c16400bb8c1a80bb0f49242f5d8 (image=quay.io/ceph/ceph:v19, name=loving_rhodes, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True)
Oct  9 09:35:09 compute-0 podman[15994]: 2025-10-09 09:35:09.971534829 +0000 UTC m=+0.087667898 container start 4327a49150d063ddaa2f2afabe4d61585f617c16400bb8c1a80bb0f49242f5d8 (image=quay.io/ceph/ceph:v19, name=loving_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:09 compute-0 podman[15994]: 2025-10-09 09:35:09.972489078 +0000 UTC m=+0.088622147 container attach 4327a49150d063ddaa2f2afabe4d61585f617c16400bb8c1a80bb0f49242f5d8 (image=quay.io/ceph/ceph:v19, name=loving_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct  9 09:35:09 compute-0 podman[15994]: 2025-10-09 09:35:09.899602141 +0000 UTC m=+0.015735240 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:10 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14223 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 09:35:10 compute-0 ceph-mgr[4772]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:10 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct  9 09:35:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:10 compute-0 ceph-mgr[4772]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Oct  9 09:35:10 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Oct  9 09:35:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct  9 09:35:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:10 compute-0 loving_rhodes[16006]: Scheduled rgw.rgw update...
Oct  9 09:35:10 compute-0 loving_rhodes[16006]: Scheduled ingress.rgw.default update...
Oct  9 09:35:10 compute-0 systemd[1]: libpod-4327a49150d063ddaa2f2afabe4d61585f617c16400bb8c1a80bb0f49242f5d8.scope: Deactivated successfully.
Oct  9 09:35:10 compute-0 conmon[16006]: conmon 4327a49150d063ddaa2f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4327a49150d063ddaa2f2afabe4d61585f617c16400bb8c1a80bb0f49242f5d8.scope/container/memory.events
Oct  9 09:35:10 compute-0 podman[15994]: 2025-10-09 09:35:10.261397818 +0000 UTC m=+0.377530888 container died 4327a49150d063ddaa2f2afabe4d61585f617c16400bb8c1a80bb0f49242f5d8 (image=quay.io/ceph/ceph:v19, name=loving_rhodes, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  9 09:35:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d0af5d67a8a59b97811605839b46a7b66730459f4d1410844ede57de8ac0ec9-merged.mount: Deactivated successfully.
Oct  9 09:35:10 compute-0 podman[15994]: 2025-10-09 09:35:10.280313113 +0000 UTC m=+0.396446182 container remove 4327a49150d063ddaa2f2afabe4d61585f617c16400bb8c1a80bb0f49242f5d8 (image=quay.io/ceph/ceph:v19, name=loving_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct  9 09:35:10 compute-0 systemd[1]: libpod-conmon-4327a49150d063ddaa2f2afabe4d61585f617c16400bb8c1a80bb0f49242f5d8.scope: Deactivated successfully.
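The orch apply above handed /tmp/ceph_rgw.yml (mounted in-container as /home/ceph_spec.yaml) to cephadm, which scheduled two services: rgw.rgw placed on compute-0;compute-1;compute-2 and ingress.rgw.default with count:2. The spec file's contents are not in the log; the skeleton below is a hypothetical reconstruction consistent with those two mgr lines only (a real ingress spec also needs fields such as backend_service, frontend_port and virtual_ip, omitted here because the log does not show them):

    # hypothetical reconstruction of /tmp/ceph_rgw.yml, inferred from the mgr 'Saving service' lines
    cat <<'EOF'
    service_type: rgw
    service_id: rgw
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    ---
    service_type: ingress
    service_id: rgw.default
    placement:
      count: 2
    EOF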
Oct  9 09:35:10 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 2.c scrub starts
Oct  9 09:35:10 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 2.c scrub ok
Oct  9 09:35:10 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3070980083' entity='client.admin' 
Oct  9 09:35:10 compute-0 ceph-mon[4497]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:10 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:10 compute-0 ceph-mon[4497]: Saving service ingress.rgw.default spec with placement count:2
Oct  9 09:35:10 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:10 compute-0 python3[16115]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 09:35:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v53: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:35:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:35:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:35:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:35:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:35:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct  9 09:35:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:35:10 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:35:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:35:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:35:10 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct  9 09:35:10 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct  9 09:35:10 compute-0 python3[16186]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760002510.3805854-34231-171225727474650/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:35:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:11 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:11 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:11 compute-0 python3[16236]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:11 compute-0 podman[16237]: 2025-10-09 09:35:11.290520478 +0000 UTC m=+0.026981034 container create 12d8f70daf02de689c460dfbb2c1be6b112c648fe0a5745ee4dc067cba877802 (image=quay.io/ceph/ceph:v19, name=cranky_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:11 compute-0 systemd[1]: Started libpod-conmon-12d8f70daf02de689c460dfbb2c1be6b112c648fe0a5745ee4dc067cba877802.scope.
Oct  9 09:35:11 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5445be6421c584a18542b147f3a229a54056a577e514ba47d6105fcd6cb63781/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5445be6421c584a18542b147f3a229a54056a577e514ba47d6105fcd6cb63781/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5445be6421c584a18542b147f3a229a54056a577e514ba47d6105fcd6cb63781/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:11 compute-0 podman[16237]: 2025-10-09 09:35:11.346236489 +0000 UTC m=+0.082697055 container init 12d8f70daf02de689c460dfbb2c1be6b112c648fe0a5745ee4dc067cba877802 (image=quay.io/ceph/ceph:v19, name=cranky_allen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 09:35:11 compute-0 podman[16237]: 2025-10-09 09:35:11.350763835 +0000 UTC m=+0.087224401 container start 12d8f70daf02de689c460dfbb2c1be6b112c648fe0a5745ee4dc067cba877802 (image=quay.io/ceph/ceph:v19, name=cranky_allen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:11 compute-0 podman[16237]: 2025-10-09 09:35:11.351733173 +0000 UTC m=+0.088193738 container attach 12d8f70daf02de689c460dfbb2c1be6b112c648fe0a5745ee4dc067cba877802 (image=quay.io/ceph/ceph:v19, name=cranky_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  9 09:35:11 compute-0 podman[16237]: 2025-10-09 09:35:11.278717123 +0000 UTC m=+0.015177708 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:11 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:11 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:11 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:11 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:11 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:11 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:11 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:11 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:35:11 compute-0 ceph-mon[4497]: Updating compute-2:/etc/ceph/ceph.conf
Oct  9 09:35:11 compute-0 ceph-mon[4497]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:11 compute-0 ceph-mon[4497]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:11 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Oct  9 09:35:11 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Oct  9 09:35:11 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14225 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 09:35:11 compute-0 ceph-mgr[4772]: [cephadm INFO root] Saving service node-exporter spec with placement *
Oct  9 09:35:11 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Oct  9 09:35:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Oct  9 09:35:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:11 compute-0 ceph-mgr[4772]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Oct  9 09:35:11 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Oct  9 09:35:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Oct  9 09:35:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:11 compute-0 ceph-mgr[4772]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Oct  9 09:35:11 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Oct  9 09:35:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Oct  9 09:35:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:11 compute-0 ceph-mgr[4772]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Oct  9 09:35:11 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Oct  9 09:35:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Oct  9 09:35:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:11 compute-0 cranky_allen[16249]: Scheduled node-exporter update...
Oct  9 09:35:11 compute-0 cranky_allen[16249]: Scheduled grafana update...
Oct  9 09:35:11 compute-0 cranky_allen[16249]: Scheduled prometheus update...
Oct  9 09:35:11 compute-0 cranky_allen[16249]: Scheduled alertmanager update...
Oct  9 09:35:11 compute-0 systemd[1]: libpod-12d8f70daf02de689c460dfbb2c1be6b112c648fe0a5745ee4dc067cba877802.scope: Deactivated successfully.
Oct  9 09:35:11 compute-0 podman[16237]: 2025-10-09 09:35:11.643875062 +0000 UTC m=+0.380335626 container died 12d8f70daf02de689c460dfbb2c1be6b112c648fe0a5745ee4dc067cba877802 (image=quay.io/ceph/ceph:v19, name=cranky_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  9 09:35:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5445be6421c584a18542b147f3a229a54056a577e514ba47d6105fcd6cb63781-merged.mount: Deactivated successfully.
Oct  9 09:35:11 compute-0 podman[16237]: 2025-10-09 09:35:11.660559909 +0000 UTC m=+0.397020475 container remove 12d8f70daf02de689c460dfbb2c1be6b112c648fe0a5745ee4dc067cba877802 (image=quay.io/ceph/ceph:v19, name=cranky_allen, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:11 compute-0 systemd[1]: libpod-conmon-12d8f70daf02de689c460dfbb2c1be6b112c648fe0a5745ee4dc067cba877802.scope: Deactivated successfully.
Oct  9 09:35:11 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:11 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:12 compute-0 python3[16309]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:12 compute-0 podman[16310]: 2025-10-09 09:35:12.067852011 +0000 UTC m=+0.027612203 container create 088b5bd29cc65a5dfc41e69ce34f645d92751c832b2b6b13383617e6245c4489 (image=quay.io/ceph/ceph:v19, name=serene_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  9 09:35:12 compute-0 systemd[1]: Started libpod-conmon-088b5bd29cc65a5dfc41e69ce34f645d92751c832b2b6b13383617e6245c4489.scope.
Oct  9 09:35:12 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a768eb719fe6208fec250155afd824b04f6f8af27516fe96d5f9b302026c19/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a768eb719fe6208fec250155afd824b04f6f8af27516fe96d5f9b302026c19/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5a768eb719fe6208fec250155afd824b04f6f8af27516fe96d5f9b302026c19/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:12 compute-0 podman[16310]: 2025-10-09 09:35:12.116853101 +0000 UTC m=+0.076613323 container init 088b5bd29cc65a5dfc41e69ce34f645d92751c832b2b6b13383617e6245c4489 (image=quay.io/ceph/ceph:v19, name=serene_blackburn, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 09:35:12 compute-0 podman[16310]: 2025-10-09 09:35:12.120587462 +0000 UTC m=+0.080347655 container start 088b5bd29cc65a5dfc41e69ce34f645d92751c832b2b6b13383617e6245c4489 (image=quay.io/ceph/ceph:v19, name=serene_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  9 09:35:12 compute-0 podman[16310]: 2025-10-09 09:35:12.121610061 +0000 UTC m=+0.081370252 container attach 088b5bd29cc65a5dfc41e69ce34f645d92751c832b2b6b13383617e6245c4489 (image=quay.io/ceph/ceph:v19, name=serene_blackburn, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:12 compute-0 podman[16310]: 2025-10-09 09:35:12.058070899 +0000 UTC m=+0.017831121 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:35:12 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:35:12 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:35:12 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:12 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v54: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:35:12 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev 6f3ef70f-7be0-4f51-8326-9a8ade6badad (Updating mon deployment (+2 -> 3))
Oct  9 09:35:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct  9 09:35:12 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 09:35:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct  9 09:35:12 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  9 09:35:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:35:12 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:35:12 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Oct  9 09:35:12 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Oct  9 09:35:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Oct  9 09:35:12 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2266537364' entity='client.admin' 
Oct  9 09:35:12 compute-0 systemd[1]: libpod-088b5bd29cc65a5dfc41e69ce34f645d92751c832b2b6b13383617e6245c4489.scope: Deactivated successfully.
Oct  9 09:35:12 compute-0 podman[16310]: 2025-10-09 09:35:12.393386349 +0000 UTC m=+0.353146531 container died 088b5bd29cc65a5dfc41e69ce34f645d92751c832b2b6b13383617e6245c4489 (image=quay.io/ceph/ceph:v19, name=serene_blackburn, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5a768eb719fe6208fec250155afd824b04f6f8af27516fe96d5f9b302026c19-merged.mount: Deactivated successfully.
Oct  9 09:35:12 compute-0 podman[16310]: 2025-10-09 09:35:12.409885809 +0000 UTC m=+0.369646001 container remove 088b5bd29cc65a5dfc41e69ce34f645d92751c832b2b6b13383617e6245c4489 (image=quay.io/ceph/ceph:v19, name=serene_blackburn, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:12 compute-0 systemd[1]: libpod-conmon-088b5bd29cc65a5dfc41e69ce34f645d92751c832b2b6b13383617e6245c4489.scope: Deactivated successfully.
Oct  9 09:35:12 compute-0 ceph-mon[4497]: Saving service node-exporter spec with placement *
Oct  9 09:35:12 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:12 compute-0 ceph-mon[4497]: Saving service grafana spec with placement compute-0;count:1
Oct  9 09:35:12 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:12 compute-0 ceph-mon[4497]: Saving service prometheus spec with placement compute-0;count:1
Oct  9 09:35:12 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:12 compute-0 ceph-mon[4497]: Saving service alertmanager spec with placement compute-0;count:1
Oct  9 09:35:12 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:12 compute-0 ceph-mon[4497]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:12 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:12 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:12 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:12 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 09:35:12 compute-0 ceph-mon[4497]: Deploying daemon mon.compute-2 on compute-2
Oct  9 09:35:12 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/2266537364' entity='client.admin' 
Oct  9 09:35:12 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 2.a scrub starts
Oct  9 09:35:12 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 2.a scrub ok
Oct  9 09:35:12 compute-0 python3[16383]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:12 compute-0 podman[16384]: 2025-10-09 09:35:12.66104326 +0000 UTC m=+0.023859748 container create 45193aeab71c4025c85ce205dd4b9a35bb299b63521f9cd4a6dcbe83a5a68163 (image=quay.io/ceph/ceph:v19, name=quizzical_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 09:35:12 compute-0 systemd[1]: Started libpod-conmon-45193aeab71c4025c85ce205dd4b9a35bb299b63521f9cd4a6dcbe83a5a68163.scope.
Oct  9 09:35:12 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e6d4ddd3d9eb227771fca0bacddaa0c028e143e022e8baee357da08f7174a31/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e6d4ddd3d9eb227771fca0bacddaa0c028e143e022e8baee357da08f7174a31/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e6d4ddd3d9eb227771fca0bacddaa0c028e143e022e8baee357da08f7174a31/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:12 compute-0 podman[16384]: 2025-10-09 09:35:12.70843147 +0000 UTC m=+0.071247959 container init 45193aeab71c4025c85ce205dd4b9a35bb299b63521f9cd4a6dcbe83a5a68163 (image=quay.io/ceph/ceph:v19, name=quizzical_raman, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  9 09:35:12 compute-0 podman[16384]: 2025-10-09 09:35:12.711960203 +0000 UTC m=+0.074776691 container start 45193aeab71c4025c85ce205dd4b9a35bb299b63521f9cd4a6dcbe83a5a68163 (image=quay.io/ceph/ceph:v19, name=quizzical_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:12 compute-0 podman[16384]: 2025-10-09 09:35:12.712903141 +0000 UTC m=+0.075719629 container attach 45193aeab71c4025c85ce205dd4b9a35bb299b63521f9cd4a6dcbe83a5a68163 (image=quay.io/ceph/ceph:v19, name=quizzical_raman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:12 compute-0 podman[16384]: 2025-10-09 09:35:12.651670178 +0000 UTC m=+0.014486666 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Oct  9 09:35:12 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3921635866' entity='client.admin' 
Oct  9 09:35:12 compute-0 systemd[1]: libpod-45193aeab71c4025c85ce205dd4b9a35bb299b63521f9cd4a6dcbe83a5a68163.scope: Deactivated successfully.
Oct  9 09:35:13 compute-0 podman[16421]: 2025-10-09 09:35:13.014043034 +0000 UTC m=+0.014498698 container died 45193aeab71c4025c85ce205dd4b9a35bb299b63521f9cd4a6dcbe83a5a68163 (image=quay.io/ceph/ceph:v19, name=quizzical_raman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  9 09:35:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e6d4ddd3d9eb227771fca0bacddaa0c028e143e022e8baee357da08f7174a31-merged.mount: Deactivated successfully.
Oct  9 09:35:13 compute-0 podman[16421]: 2025-10-09 09:35:13.029327473 +0000 UTC m=+0.029783127 container remove 45193aeab71c4025c85ce205dd4b9a35bb299b63521f9cd4a6dcbe83a5a68163 (image=quay.io/ceph/ceph:v19, name=quizzical_raman, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 09:35:13 compute-0 systemd[1]: libpod-conmon-45193aeab71c4025c85ce205dd4b9a35bb299b63521f9cd4a6dcbe83a5a68163.scope: Deactivated successfully.
Oct  9 09:35:13 compute-0 python3[16458]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:13 compute-0 podman[16459]: 2025-10-09 09:35:13.286308125 +0000 UTC m=+0.024640280 container create da480b61f554e7af35e5ae1ef216a2b591980258d837eb63fe4de933fb0eeea4 (image=quay.io/ceph/ceph:v19, name=romantic_pascal, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  9 09:35:13 compute-0 systemd[1]: Started libpod-conmon-da480b61f554e7af35e5ae1ef216a2b591980258d837eb63fe4de933fb0eeea4.scope.
Oct  9 09:35:13 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90360a8a4c4a431480d0d8450ad97e76652a2d1df6976a976962e7340dce0db/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90360a8a4c4a431480d0d8450ad97e76652a2d1df6976a976962e7340dce0db/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90360a8a4c4a431480d0d8450ad97e76652a2d1df6976a976962e7340dce0db/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:13 compute-0 podman[16459]: 2025-10-09 09:35:13.328991543 +0000 UTC m=+0.067323708 container init da480b61f554e7af35e5ae1ef216a2b591980258d837eb63fe4de933fb0eeea4 (image=quay.io/ceph/ceph:v19, name=romantic_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:13 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct  9 09:35:13 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  9 09:35:13 compute-0 podman[16459]: 2025-10-09 09:35:13.334230491 +0000 UTC m=+0.072562646 container start da480b61f554e7af35e5ae1ef216a2b591980258d837eb63fe4de933fb0eeea4 (image=quay.io/ceph/ceph:v19, name=romantic_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  9 09:35:13 compute-0 podman[16459]: 2025-10-09 09:35:13.337197896 +0000 UTC m=+0.075530061 container attach da480b61f554e7af35e5ae1ef216a2b591980258d837eb63fe4de933fb0eeea4 (image=quay.io/ceph/ceph:v19, name=romantic_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:13 compute-0 podman[16459]: 2025-10-09 09:35:13.276148879 +0000 UTC m=+0.014481034 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:13 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Oct  9 09:35:13 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4272592449' entity='client.admin' 
Oct  9 09:35:13 compute-0 systemd[1]: libpod-da480b61f554e7af35e5ae1ef216a2b591980258d837eb63fe4de933fb0eeea4.scope: Deactivated successfully.
Oct  9 09:35:13 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Oct  9 09:35:13 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Oct  9 09:35:13 compute-0 podman[16496]: 2025-10-09 09:35:13.637854157 +0000 UTC m=+0.017867640 container died da480b61f554e7af35e5ae1ef216a2b591980258d837eb63fe4de933fb0eeea4 (image=quay.io/ceph/ceph:v19, name=romantic_pascal, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  9 09:35:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c90360a8a4c4a431480d0d8450ad97e76652a2d1df6976a976962e7340dce0db-merged.mount: Deactivated successfully.
Oct  9 09:35:13 compute-0 podman[16496]: 2025-10-09 09:35:13.652367993 +0000 UTC m=+0.032381466 container remove da480b61f554e7af35e5ae1ef216a2b591980258d837eb63fe4de933fb0eeea4 (image=quay.io/ceph/ceph:v19, name=romantic_pascal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:13 compute-0 systemd[1]: libpod-conmon-da480b61f554e7af35e5ae1ef216a2b591980258d837eb63fe4de933fb0eeea4.scope: Deactivated successfully.
Oct  9 09:35:13 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3921635866' entity='client.admin' 
Oct  9 09:35:13 compute-0 ceph-mon[4497]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct  9 09:35:13 compute-0 ceph-mon[4497]: Cluster is now healthy
Oct  9 09:35:13 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/4272592449' entity='client.admin' 
Oct  9 09:35:14 compute-0 python3[16532]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:14 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v55: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:35:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:35:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:35:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct  9 09:35:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct  9 09:35:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct  9 09:35:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct  9 09:35:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 09:35:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct  9 09:35:14 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  9 09:35:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:35:14 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:35:14 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Oct  9 09:35:14 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Oct  9 09:35:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Oct  9 09:35:14 compute-0 ceph-mgr[4772]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/687614102; not ready for session (expect reconnect)
Oct  9 09:35:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Oct  9 09:35:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  9 09:35:14 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  9 09:35:14 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Oct  9 09:35:14 compute-0 ceph-mon[4497]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  9 09:35:14 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  9 09:35:14 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Oct  9 09:35:14 compute-0 ceph-mon[4497]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Oct  9 09:35:14 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 09:35:14 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  9 09:35:14 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  9 09:35:14 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct  9 09:35:14 compute-0 python3[16567]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.lwqgfy/server_addr 192.168.122.100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:14 compute-0 podman[16568]: 2025-10-09 09:35:14.488679762 +0000 UTC m=+0.024738005 container create 3a3bf5aa8906102455bbbcd136eaac80818d89ed6df135e4418efc398c1680ab (image=quay.io/ceph/ceph:v19, name=nice_yonath, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:14 compute-0 systemd[1]: Started libpod-conmon-3a3bf5aa8906102455bbbcd136eaac80818d89ed6df135e4418efc398c1680ab.scope.
Oct  9 09:35:14 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73308e107a2ed64393080c9b5557da01e77a012526b275e316c356bf307a0cc5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73308e107a2ed64393080c9b5557da01e77a012526b275e316c356bf307a0cc5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73308e107a2ed64393080c9b5557da01e77a012526b275e316c356bf307a0cc5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:14 compute-0 podman[16568]: 2025-10-09 09:35:14.555057024 +0000 UTC m=+0.091115298 container init 3a3bf5aa8906102455bbbcd136eaac80818d89ed6df135e4418efc398c1680ab (image=quay.io/ceph/ceph:v19, name=nice_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct  9 09:35:14 compute-0 podman[16568]: 2025-10-09 09:35:14.558954603 +0000 UTC m=+0.095012858 container start 3a3bf5aa8906102455bbbcd136eaac80818d89ed6df135e4418efc398c1680ab (image=quay.io/ceph/ceph:v19, name=nice_yonath, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:14 compute-0 podman[16568]: 2025-10-09 09:35:14.560117176 +0000 UTC m=+0.096175429 container attach 3a3bf5aa8906102455bbbcd136eaac80818d89ed6df135e4418efc398c1680ab (image=quay.io/ceph/ceph:v19, name=nice_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct  9 09:35:14 compute-0 podman[16568]: 2025-10-09 09:35:14.478590077 +0000 UTC m=+0.014648341 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:14 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Oct  9 09:35:14 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Oct  9 09:35:14 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct  9 09:35:14 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct  9 09:35:15 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct  9 09:35:15 compute-0 ceph-mgr[4772]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/687614102; not ready for session (expect reconnect)
Oct  9 09:35:15 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  9 09:35:15 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  9 09:35:15 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct  9 09:35:15 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Oct  9 09:35:15 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Oct  9 09:35:15 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:35:15 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct  9 09:35:15 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct  9 09:35:15 compute-0 ceph-mgr[4772]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/462493387; not ready for session (expect reconnect)
Oct  9 09:35:15 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 09:35:15 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 09:35:15 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct  9 09:35:15 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct  9 09:35:16 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct  9 09:35:16 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v56: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:35:16 compute-0 ceph-mgr[4772]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/687614102; not ready for session (expect reconnect)
Oct  9 09:35:16 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  9 09:35:16 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  9 09:35:16 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct  9 09:35:16 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 2.e scrub starts
Oct  9 09:35:16 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 2.e scrub ok
Oct  9 09:35:16 compute-0 ceph-mgr[4772]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/462493387; not ready for session (expect reconnect)
Oct  9 09:35:16 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 09:35:16 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 09:35:16 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct  9 09:35:17 compute-0 ceph-mgr[4772]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/687614102; not ready for session (expect reconnect)
Oct  9 09:35:17 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  9 09:35:17 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct  9 09:35:17 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  9 09:35:17 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Oct  9 09:35:17 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Oct  9 09:35:17 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct  9 09:35:17 compute-0 ceph-mgr[4772]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/462493387; not ready for session (expect reconnect)
Oct  9 09:35:17 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 09:35:17 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 09:35:17 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct  9 09:35:17 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct  9 09:35:17 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct  9 09:35:18 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct  9 09:35:18 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v57: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:35:18 compute-0 ceph-mgr[4772]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/687614102; not ready for session (expect reconnect)
Oct  9 09:35:18 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  9 09:35:18 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  9 09:35:18 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct  9 09:35:18 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Oct  9 09:35:18 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Oct  9 09:35:18 compute-0 ceph-mgr[4772]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/462493387; not ready for session (expect reconnect)
Oct  9 09:35:18 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 09:35:18 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 09:35:18 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct  9 09:35:19 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Oct  9 09:35:19 compute-0 ceph-mgr[4772]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/687614102; not ready for session (expect reconnect)
Oct  9 09:35:19 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  9 09:35:19 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  9 09:35:19 compute-0 ceph-mon[4497]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Oct  9 09:35:19 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : monmap epoch 2
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : last_changed 2025-10-09T09:35:14.415832+0000
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : created 2025-10-09T09:33:38.201593+0000
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct  9 09:35:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : fsmap 
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.lwqgfy(active, since 83s)
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:19 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev 6f3ef70f-7be0-4f51-8326-9a8ade6badad (Updating mon deployment (+2 -> 3))
Oct  9 09:35:19 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event 6f3ef70f-7be0-4f51-8326-9a8ade6badad (Updating mon deployment (+2 -> 3)) in 7 seconds
Oct  9 09:35:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:19 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev 3718d6a8-df61-413b-9eed-62c97445b346 (Updating mgr deployment (+2 -> 3))
Oct  9 09:35:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.takdnm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.takdnm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.takdnm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  9 09:35:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  9 09:35:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:35:19 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.takdnm on compute-2
Oct  9 09:35:19 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.takdnm on compute-2
Oct  9 09:35:19 compute-0 ceph-mon[4497]: Deploying daemon mon.compute-1 on compute-1
Oct  9 09:35:19 compute-0 ceph-mon[4497]: mon.compute-0 calling monitor election
Oct  9 09:35:19 compute-0 ceph-mon[4497]: mon.compute-2 calling monitor election
Oct  9 09:35:19 compute-0 ceph-mon[4497]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct  9 09:35:19 compute-0 ceph-mon[4497]: overall HEALTH_OK
Oct  9 09:35:19 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:19 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:19 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:19 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:19 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.takdnm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 09:35:19 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.takdnm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  9 09:35:19 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Oct  9 09:35:19 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Oct  9 09:35:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct  9 09:35:19 compute-0 ceph-mgr[4772]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/462493387; not ready for session (expect reconnect)
Oct  9 09:35:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Oct  9 09:35:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 09:35:19 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Oct  9 09:35:19 compute-0 ceph-mon[4497]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Oct  9 09:35:19 compute-0 ceph-mon[4497]: paxos.0).electionLogic(10) init, last seen epoch 10
Oct  9 09:35:19 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 09:35:19 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 09:35:19 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  9 09:35:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  9 09:35:19 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct  9 09:35:20 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v58: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:35:20 compute-0 ceph-mgr[4772]: mgr.server handle_report got status from non-daemon mon.compute-2
Oct  9 09:35:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:20.416+0000 7f4e49fa3640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Oct  9 09:35:20 compute-0 ceph-mgr[4772]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/462493387; not ready for session (expect reconnect)
Oct  9 09:35:20 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 09:35:20 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 09:35:20 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct  9 09:35:20 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:20 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:20 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:35:20 compute-0 ceph-mgr[4772]: [progress INFO root] Writing back 4 completed events
Oct  9 09:35:20 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 09:35:20 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:20 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:21 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:21 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:21 compute-0 ceph-mgr[4772]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/462493387; not ready for session (expect reconnect)
Oct  9 09:35:21 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 09:35:21 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 09:35:21 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct  9 09:35:22 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:22 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:22 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v59: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:35:22 compute-0 ceph-mgr[4772]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/462493387; not ready for session (expect reconnect)
Oct  9 09:35:22 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 09:35:22 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 09:35:22 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct  9 09:35:23 compute-0 ceph-mgr[4772]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/462493387; not ready for session (expect reconnect)
Oct  9 09:35:23 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 09:35:23 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 09:35:23 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct  9 09:35:23 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:23 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:23 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:23 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:24 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:24 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:24 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v60: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:35:24 compute-0 ceph-mgr[4772]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/462493387; not ready for session (expect reconnect)
Oct  9 09:35:24 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 09:35:24 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Oct  9 09:35:24 compute-0 ceph-mon[4497]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Oct  9 09:35:24 compute-0 ceph-mon[4497]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : monmap epoch 3
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : last_changed 2025-10-09T09:35:19.619597+0000
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : created 2025-10-09T09:33:38.201593+0000
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : election_strategy: 1
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Oct  9 09:35:24 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : fsmap 
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.lwqgfy(active, since 88s)
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : overall HEALTH_OK
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:24 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:24 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:24 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.etokpp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.etokpp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.etokpp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  9 09:35:24 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  9 09:35:24 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:35:24 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:35:24 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.etokpp on compute-1
Oct  9 09:35:24 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.etokpp on compute-1
Oct  9 09:35:24 compute-0 ceph-mon[4497]: mon.compute-0 calling monitor election
Oct  9 09:35:24 compute-0 ceph-mon[4497]: mon.compute-2 calling monitor election
Oct  9 09:35:24 compute-0 ceph-mon[4497]: mon.compute-1 calling monitor election
Oct  9 09:35:24 compute-0 ceph-mon[4497]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct  9 09:35:24 compute-0 ceph-mon[4497]: overall HEALTH_OK
Oct  9 09:35:24 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:24 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:24 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:24 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:24 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.etokpp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 09:35:24 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.etokpp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  9 09:35:25 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.lwqgfy/server_addr}] v 0)
Oct  9 09:35:25 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3098806995' entity='client.admin' 
Oct  9 09:35:25 compute-0 systemd[1]: libpod-3a3bf5aa8906102455bbbcd136eaac80818d89ed6df135e4418efc398c1680ab.scope: Deactivated successfully.
Oct  9 09:35:25 compute-0 podman[16568]: 2025-10-09 09:35:25.244840167 +0000 UTC m=+10.780898431 container died 3a3bf5aa8906102455bbbcd136eaac80818d89ed6df135e4418efc398c1680ab (image=quay.io/ceph/ceph:v19, name=nice_yonath, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 09:35:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-73308e107a2ed64393080c9b5557da01e77a012526b275e316c356bf307a0cc5-merged.mount: Deactivated successfully.
Oct  9 09:35:25 compute-0 podman[16568]: 2025-10-09 09:35:25.272170056 +0000 UTC m=+10.808228311 container remove 3a3bf5aa8906102455bbbcd136eaac80818d89ed6df135e4418efc398c1680ab (image=quay.io/ceph/ceph:v19, name=nice_yonath, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  9 09:35:25 compute-0 systemd[1]: libpod-conmon-3a3bf5aa8906102455bbbcd136eaac80818d89ed6df135e4418efc398c1680ab.scope: Deactivated successfully.
Oct  9 09:35:25 compute-0 ceph-mgr[4772]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/462493387; not ready for session (expect reconnect)
Oct  9 09:35:25 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 09:35:25 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 09:35:25 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:35:25 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:35:25 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:35:25 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:35:25 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:35:25 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:35:25 compute-0 python3[16640]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard//server_addr 192.168.122.101#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:25 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:35:25 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:25 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:35:25 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:25 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct  9 09:35:25 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:25 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev 3718d6a8-df61-413b-9eed-62c97445b346 (Updating mgr deployment (+2 -> 3))
Oct  9 09:35:25 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event 3718d6a8-df61-413b-9eed-62c97445b346 (Updating mgr deployment (+2 -> 3)) in 7 seconds
Oct  9 09:35:25 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Oct  9 09:35:25 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:25 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev e462c521-befe-424d-89d3-c4736a9acd51 (Updating crash deployment (+1 -> 3))
Oct  9 09:35:25 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct  9 09:35:25 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 09:35:25 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  9 09:35:25 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:35:25 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:35:25 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Oct  9 09:35:25 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Oct  9 09:35:25 compute-0 podman[16641]: 2025-10-09 09:35:25.981773439 +0000 UTC m=+0.029281159 container create cc5fc4f216c91a2ffd12a3bd449e9924ed771cc2c937d39bf0a4162b68e57387 (image=quay.io/ceph/ceph:v19, name=objective_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  9 09:35:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:26 compute-0 systemd[1]: Started libpod-conmon-cc5fc4f216c91a2ffd12a3bd449e9924ed771cc2c937d39bf0a4162b68e57387.scope.
Oct  9 09:35:26 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8ca2a2fcd13f2bc90b0efc089ea72fb254fad5b8ae4870c56c1983b3df5343/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8ca2a2fcd13f2bc90b0efc089ea72fb254fad5b8ae4870c56c1983b3df5343/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8ca2a2fcd13f2bc90b0efc089ea72fb254fad5b8ae4870c56c1983b3df5343/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:26 compute-0 podman[16641]: 2025-10-09 09:35:26.041259339 +0000 UTC m=+0.088767049 container init cc5fc4f216c91a2ffd12a3bd449e9924ed771cc2c937d39bf0a4162b68e57387 (image=quay.io/ceph/ceph:v19, name=objective_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  9 09:35:26 compute-0 podman[16641]: 2025-10-09 09:35:26.045205659 +0000 UTC m=+0.092713370 container start cc5fc4f216c91a2ffd12a3bd449e9924ed771cc2c937d39bf0a4162b68e57387 (image=quay.io/ceph/ceph:v19, name=objective_cerf, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  9 09:35:26 compute-0 podman[16641]: 2025-10-09 09:35:26.046212378 +0000 UTC m=+0.093720088 container attach cc5fc4f216c91a2ffd12a3bd449e9924ed771cc2c937d39bf0a4162b68e57387 (image=quay.io/ceph/ceph:v19, name=objective_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  9 09:35:26 compute-0 podman[16641]: 2025-10-09 09:35:25.971872041 +0000 UTC m=+0.019379761 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:26 compute-0 ceph-mon[4497]: Deploying daemon mgr.compute-1.etokpp on compute-1
Oct  9 09:35:26 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3098806995' entity='client.admin' 
Oct  9 09:35:26 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:26 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:26 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:26 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:26 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 09:35:26 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  9 09:35:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard//server_addr}] v 0)
Oct  9 09:35:26 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2874472706' entity='client.admin' 
Oct  9 09:35:26 compute-0 systemd[1]: libpod-cc5fc4f216c91a2ffd12a3bd449e9924ed771cc2c937d39bf0a4162b68e57387.scope: Deactivated successfully.
Oct  9 09:35:26 compute-0 podman[16641]: 2025-10-09 09:35:26.323905221 +0000 UTC m=+0.371412931 container died cc5fc4f216c91a2ffd12a3bd449e9924ed771cc2c937d39bf0a4162b68e57387 (image=quay.io/ceph/ceph:v19, name=objective_cerf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct  9 09:35:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e8ca2a2fcd13f2bc90b0efc089ea72fb254fad5b8ae4870c56c1983b3df5343-merged.mount: Deactivated successfully.
Oct  9 09:35:26 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v61: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:35:26 compute-0 podman[16641]: 2025-10-09 09:35:26.34064322 +0000 UTC m=+0.388150930 container remove cc5fc4f216c91a2ffd12a3bd449e9924ed771cc2c937d39bf0a4162b68e57387 (image=quay.io/ceph/ceph:v19, name=objective_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:26 compute-0 systemd[1]: libpod-conmon-cc5fc4f216c91a2ffd12a3bd449e9924ed771cc2c937d39bf0a4162b68e57387.scope: Deactivated successfully.
Oct  9 09:35:26 compute-0 ceph-mgr[4772]: mgr.server handle_report got status from non-daemon mon.compute-1
Oct  9 09:35:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:26.620+0000 7f4e49fa3640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Oct  9 09:35:27 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:35:27 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:27 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:35:27 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:27 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct  9 09:35:27 compute-0 python3[16713]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.takdnm/server_addr 192.168.122.102#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:27 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:27 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev e462c521-befe-424d-89d3-c4736a9acd51 (Updating crash deployment (+1 -> 3))
Oct  9 09:35:27 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event e462c521-befe-424d-89d3-c4736a9acd51 (Updating crash deployment (+1 -> 3)) in 1 seconds
Oct  9 09:35:27 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Oct  9 09:35:27 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:27 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:35:27 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:35:27 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:35:27 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:35:27 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:35:27 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:35:27 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:35:27 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:35:27 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:35:27 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:35:27 compute-0 podman[16714]: 2025-10-09 09:35:27.22999788 +0000 UTC m=+0.027273107 container create 7562fe2c62683d0c7320852406aa4712fd42b7b067a2193b17758e97048d3a33 (image=quay.io/ceph/ceph:v19, name=funny_cori, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:27 compute-0 systemd[1]: Started libpod-conmon-7562fe2c62683d0c7320852406aa4712fd42b7b067a2193b17758e97048d3a33.scope.
Oct  9 09:35:27 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2bf9489cccb1647acb9f495b5f9827649fbcba12d8721c8d45d538ed4130a1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2bf9489cccb1647acb9f495b5f9827649fbcba12d8721c8d45d538ed4130a1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e2bf9489cccb1647acb9f495b5f9827649fbcba12d8721c8d45d538ed4130a1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:27 compute-0 podman[16714]: 2025-10-09 09:35:27.277289632 +0000 UTC m=+0.074564879 container init 7562fe2c62683d0c7320852406aa4712fd42b7b067a2193b17758e97048d3a33 (image=quay.io/ceph/ceph:v19, name=funny_cori, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:27 compute-0 podman[16714]: 2025-10-09 09:35:27.284506868 +0000 UTC m=+0.081782095 container start 7562fe2c62683d0c7320852406aa4712fd42b7b067a2193b17758e97048d3a33 (image=quay.io/ceph/ceph:v19, name=funny_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  9 09:35:27 compute-0 podman[16714]: 2025-10-09 09:35:27.285682672 +0000 UTC m=+0.082957900 container attach 7562fe2c62683d0c7320852406aa4712fd42b7b067a2193b17758e97048d3a33 (image=quay.io/ceph/ceph:v19, name=funny_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 09:35:27 compute-0 ceph-mon[4497]: Deploying daemon crash.compute-2 on compute-2
Oct  9 09:35:27 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/2874472706' entity='client.admin' 
Oct  9 09:35:27 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:27 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:27 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:27 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:27 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:35:27 compute-0 ceph-mon[4497]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:35:27 compute-0 podman[16714]: 2025-10-09 09:35:27.218842999 +0000 UTC m=+0.016118236 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:27 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.takdnm/server_addr}] v 0)
Oct  9 09:35:27 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3618703096' entity='client.admin' 
Oct  9 09:35:27 compute-0 systemd[1]: libpod-7562fe2c62683d0c7320852406aa4712fd42b7b067a2193b17758e97048d3a33.scope: Deactivated successfully.
Oct  9 09:35:27 compute-0 conmon[16750]: conmon 7562fe2c62683d0c7320 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7562fe2c62683d0c7320852406aa4712fd42b7b067a2193b17758e97048d3a33.scope/container/memory.events
Oct  9 09:35:27 compute-0 podman[16714]: 2025-10-09 09:35:27.566869 +0000 UTC m=+0.364144227 container died 7562fe2c62683d0c7320852406aa4712fd42b7b067a2193b17758e97048d3a33 (image=quay.io/ceph/ceph:v19, name=funny_cori, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e2bf9489cccb1647acb9f495b5f9827649fbcba12d8721c8d45d538ed4130a1-merged.mount: Deactivated successfully.
Oct  9 09:35:27 compute-0 podman[16714]: 2025-10-09 09:35:27.586623444 +0000 UTC m=+0.383898672 container remove 7562fe2c62683d0c7320852406aa4712fd42b7b067a2193b17758e97048d3a33 (image=quay.io/ceph/ceph:v19, name=funny_cori, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  9 09:35:27 compute-0 systemd[1]: libpod-conmon-7562fe2c62683d0c7320852406aa4712fd42b7b067a2193b17758e97048d3a33.scope: Deactivated successfully.
Oct  9 09:35:27 compute-0 podman[16832]: 2025-10-09 09:35:27.60330801 +0000 UTC m=+0.044934156 container create 4891794fdb8c51be5b116651cda799350a69f0175802b86666e6e534854e25c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_panini, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:27 compute-0 systemd[1]: Started libpod-conmon-4891794fdb8c51be5b116651cda799350a69f0175802b86666e6e534854e25c6.scope.
Oct  9 09:35:27 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:27 compute-0 podman[16832]: 2025-10-09 09:35:27.653725478 +0000 UTC m=+0.095351634 container init 4891794fdb8c51be5b116651cda799350a69f0175802b86666e6e534854e25c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_panini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  9 09:35:27 compute-0 podman[16832]: 2025-10-09 09:35:27.657681799 +0000 UTC m=+0.099307946 container start 4891794fdb8c51be5b116651cda799350a69f0175802b86666e6e534854e25c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct  9 09:35:27 compute-0 podman[16832]: 2025-10-09 09:35:27.658910255 +0000 UTC m=+0.100536401 container attach 4891794fdb8c51be5b116651cda799350a69f0175802b86666e6e534854e25c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_panini, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:27 compute-0 friendly_panini[16856]: 167 167
Oct  9 09:35:27 compute-0 systemd[1]: libpod-4891794fdb8c51be5b116651cda799350a69f0175802b86666e6e534854e25c6.scope: Deactivated successfully.
Oct  9 09:35:27 compute-0 podman[16832]: 2025-10-09 09:35:27.660623204 +0000 UTC m=+0.102249350 container died 4891794fdb8c51be5b116651cda799350a69f0175802b86666e6e534854e25c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-82168b6243345b85506f5fc060fb98d215015ccbc76d75843299886021ff9d80-merged.mount: Deactivated successfully.
Oct  9 09:35:27 compute-0 podman[16832]: 2025-10-09 09:35:27.582357482 +0000 UTC m=+0.023983648 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:27 compute-0 podman[16832]: 2025-10-09 09:35:27.681807086 +0000 UTC m=+0.123433232 container remove 4891794fdb8c51be5b116651cda799350a69f0175802b86666e6e534854e25c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:27 compute-0 systemd[1]: libpod-conmon-4891794fdb8c51be5b116651cda799350a69f0175802b86666e6e534854e25c6.scope: Deactivated successfully.
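The bare "167 167" printed by friendly_panini above (and again by priceless_bell and stoic_solomon below) is consistent with a uid/gid probe: 167 is the fixed numeric id of the ceph user and group inside Ceph container images. An illustrative command that would print exactly this pair, assuming the wrapper ran a stat-style ownership check (the actual entrypoint arguments are not captured in this log):

    podman run --rm --entrypoint stat quay.io/ceph/ceph:v19 -c '%u %g' /var/lib/ceph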
Oct  9 09:35:27 compute-0 podman[16904]: 2025-10-09 09:35:27.797282008 +0000 UTC m=+0.029268153 container create 2497990fe7c596ff6ccf2b2073e561ee00734270ce3580847244e6313650f364 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:27 compute-0 systemd[1]: Started libpod-conmon-2497990fe7c596ff6ccf2b2073e561ee00734270ce3580847244e6313650f364.scope.
Oct  9 09:35:27 compute-0 python3[16898]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
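The Ansible task above shells out to podman for a one-shot ceph CLI call. Reconstructed with the whitespace normalized (a readability sketch of the logged _raw_params, not a new command), it is roughly:

    podman run --rm --net=host --ipc=host --interactive \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        mgr module disable dashboard

The --rm flag is why every container in this section runs through a create/start/attach/died/remove cycle within a second or two of being launched.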
Oct  9 09:35:27 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb38fc15b7bf9d799a12270c6fd68f450fb520b3088b8f8cff4b31d4163a82c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb38fc15b7bf9d799a12270c6fd68f450fb520b3088b8f8cff4b31d4163a82c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb38fc15b7bf9d799a12270c6fd68f450fb520b3088b8f8cff4b31d4163a82c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb38fc15b7bf9d799a12270c6fd68f450fb520b3088b8f8cff4b31d4163a82c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb38fc15b7bf9d799a12270c6fd68f450fb520b3088b8f8cff4b31d4163a82c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:27 compute-0 podman[16904]: 2025-10-09 09:35:27.867340706 +0000 UTC m=+0.099326861 container init 2497990fe7c596ff6ccf2b2073e561ee00734270ce3580847244e6313650f364 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  9 09:35:27 compute-0 podman[16904]: 2025-10-09 09:35:27.872966624 +0000 UTC m=+0.104952769 container start 2497990fe7c596ff6ccf2b2073e561ee00734270ce3580847244e6313650f364 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  9 09:35:27 compute-0 podman[16904]: 2025-10-09 09:35:27.874336829 +0000 UTC m=+0.106322994 container attach 2497990fe7c596ff6ccf2b2073e561ee00734270ce3580847244e6313650f364 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct  9 09:35:27 compute-0 podman[16904]: 2025-10-09 09:35:27.784800455 +0000 UTC m=+0.016786620 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:27 compute-0 podman[16920]: 2025-10-09 09:35:27.886061197 +0000 UTC m=+0.032095801 container create 66e8d4b776f0d1355e1bf627583d1775444eda3dc6e019887492ecfeaaf24a4e (image=quay.io/ceph/ceph:v19, name=vigorous_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:35:27 compute-0 systemd[1]: Started libpod-conmon-66e8d4b776f0d1355e1bf627583d1775444eda3dc6e019887492ecfeaaf24a4e.scope.
Oct  9 09:35:27 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f356d6843505079dd25bbf2a4a724eee1fe3868e48f72b999296a8285c4cb666/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f356d6843505079dd25bbf2a4a724eee1fe3868e48f72b999296a8285c4cb666/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f356d6843505079dd25bbf2a4a724eee1fe3868e48f72b999296a8285c4cb666/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:27 compute-0 podman[16920]: 2025-10-09 09:35:27.940463242 +0000 UTC m=+0.086497856 container init 66e8d4b776f0d1355e1bf627583d1775444eda3dc6e019887492ecfeaaf24a4e (image=quay.io/ceph/ceph:v19, name=vigorous_golick, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:27 compute-0 podman[16920]: 2025-10-09 09:35:27.945029257 +0000 UTC m=+0.091063860 container start 66e8d4b776f0d1355e1bf627583d1775444eda3dc6e019887492ecfeaaf24a4e (image=quay.io/ceph/ceph:v19, name=vigorous_golick, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:35:27 compute-0 podman[16920]: 2025-10-09 09:35:27.946112104 +0000 UTC m=+0.092146708 container attach 66e8d4b776f0d1355e1bf627583d1775444eda3dc6e019887492ecfeaaf24a4e (image=quay.io/ceph/ceph:v19, name=vigorous_golick, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  9 09:35:27 compute-0 podman[16920]: 2025-10-09 09:35:27.873743506 +0000 UTC m=+0.019778131 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:28 compute-0 xenodochial_jones[16917]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:35:28 compute-0 xenodochial_jones[16917]: --> All data devices are unavailable
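"passed data devices: 0 physical, 1 LVM" followed by "All data devices are unavailable" has the shape of a ceph-volume batch report: the container was handed one LVM data device and rejected it, which fits the lvm listing further below showing that LV already tagged as osd.1 of this cluster. A sketch of the kind of invocation that reports this (standard ceph-volume options, but the exact wrapper command is not in the log):

    ceph-volume lvm batch --report /dev/ceph_vg0/ceph_lv0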
Oct  9 09:35:28 compute-0 systemd[1]: libpod-2497990fe7c596ff6ccf2b2073e561ee00734270ce3580847244e6313650f364.scope: Deactivated successfully.
Oct  9 09:35:28 compute-0 conmon[16917]: conmon 2497990fe7c596ff6ccf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2497990fe7c596ff6ccf2b2073e561ee00734270ce3580847244e6313650f364.scope/container/memory.events
Oct  9 09:35:28 compute-0 podman[16904]: 2025-10-09 09:35:28.135066351 +0000 UTC m=+0.367052496 container died 2497990fe7c596ff6ccf2b2073e561ee00734270ce3580847244e6313650f364 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jones, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:28 compute-0 podman[16904]: 2025-10-09 09:35:28.154991001 +0000 UTC m=+0.386977145 container remove 2497990fe7c596ff6ccf2b2073e561ee00734270ce3580847244e6313650f364 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 09:35:28 compute-0 systemd[1]: libpod-conmon-2497990fe7c596ff6ccf2b2073e561ee00734270ce3580847244e6313650f364.scope: Deactivated successfully.
Oct  9 09:35:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Oct  9 09:35:28 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1996078233' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct  9 09:35:28 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v62: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:35:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "0493bfe4-e28c-49f6-8185-a07f1e80a32f"} v 0)
Oct  9 09:35:28 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/2413203245' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0493bfe4-e28c-49f6-8185-a07f1e80a32f"}]: dispatch
Oct  9 09:35:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Oct  9 09:35:28 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/2413203245' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0493bfe4-e28c-49f6-8185-a07f1e80a32f"}]': finished
Oct  9 09:35:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Oct  9 09:35:28 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Oct  9 09:35:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 09:35:28 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 09:35:28 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
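The "(2) No such file or directory" here reads as a timing artifact rather than a failure: the mon registered osd.2 via "osd new" moments earlier (the dispatch/finished pair for uuid 0493bfe4 above), but the new daemon has not booted yet, so no metadata has been pushed for the mgr to read. The mgr's query corresponds to this CLI, which keeps returning error (2) until the OSD first starts:

    ceph osd metadata 2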
Oct  9 09:35:28 compute-0 podman[17059]: 2025-10-09 09:35:28.546645904 +0000 UTC m=+0.025867793 container create 37f91078cc28605b1a3bbafd9442d1f1c8bb39def96470309ebdaa01d1352c22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  9 09:35:28 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3618703096' entity='client.admin' 
Oct  9 09:35:28 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/1996078233' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct  9 09:35:28 compute-0 ceph-mon[4497]: from='client.? 192.168.122.102:0/2413203245' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0493bfe4-e28c-49f6-8185-a07f1e80a32f"}]: dispatch
Oct  9 09:35:28 compute-0 ceph-mon[4497]: from='client.? 192.168.122.102:0/2413203245' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0493bfe4-e28c-49f6-8185-a07f1e80a32f"}]': finished
Oct  9 09:35:28 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1996078233' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct  9 09:35:28 compute-0 vigorous_golick[16934]: module 'dashboard' is already disabled
Oct  9 09:35:28 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.lwqgfy(active, since 92s)
Oct  9 09:35:28 compute-0 systemd[1]: Started libpod-conmon-37f91078cc28605b1a3bbafd9442d1f1c8bb39def96470309ebdaa01d1352c22.scope.
Oct  9 09:35:28 compute-0 systemd[1]: libpod-66e8d4b776f0d1355e1bf627583d1775444eda3dc6e019887492ecfeaaf24a4e.scope: Deactivated successfully.
Oct  9 09:35:28 compute-0 podman[16920]: 2025-10-09 09:35:28.574307881 +0000 UTC m=+0.720342485 container died 66e8d4b776f0d1355e1bf627583d1775444eda3dc6e019887492ecfeaaf24a4e (image=quay.io/ceph/ceph:v19, name=vigorous_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct  9 09:35:28 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fb38fc15b7bf9d799a12270c6fd68f450fb520b3088b8f8cff4b31d4163a82c-merged.mount: Deactivated successfully.
Oct  9 09:35:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-f356d6843505079dd25bbf2a4a724eee1fe3868e48f72b999296a8285c4cb666-merged.mount: Deactivated successfully.
Oct  9 09:35:28 compute-0 podman[16920]: 2025-10-09 09:35:28.597050539 +0000 UTC m=+0.743085142 container remove 66e8d4b776f0d1355e1bf627583d1775444eda3dc6e019887492ecfeaaf24a4e (image=quay.io/ceph/ceph:v19, name=vigorous_golick, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:28 compute-0 podman[17059]: 2025-10-09 09:35:28.598989029 +0000 UTC m=+0.078210928 container init 37f91078cc28605b1a3bbafd9442d1f1c8bb39def96470309ebdaa01d1352c22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct  9 09:35:28 compute-0 podman[17059]: 2025-10-09 09:35:28.605534262 +0000 UTC m=+0.084756151 container start 37f91078cc28605b1a3bbafd9442d1f1c8bb39def96470309ebdaa01d1352c22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bell, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 09:35:28 compute-0 systemd[1]: libpod-conmon-66e8d4b776f0d1355e1bf627583d1775444eda3dc6e019887492ecfeaaf24a4e.scope: Deactivated successfully.
Oct  9 09:35:28 compute-0 podman[17059]: 2025-10-09 09:35:28.60718964 +0000 UTC m=+0.086411550 container attach 37f91078cc28605b1a3bbafd9442d1f1c8bb39def96470309ebdaa01d1352c22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bell, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  9 09:35:28 compute-0 priceless_bell[17073]: 167 167
Oct  9 09:35:28 compute-0 systemd[1]: libpod-37f91078cc28605b1a3bbafd9442d1f1c8bb39def96470309ebdaa01d1352c22.scope: Deactivated successfully.
Oct  9 09:35:28 compute-0 conmon[17073]: conmon 37f91078cc28605b1a3b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37f91078cc28605b1a3bbafd9442d1f1c8bb39def96470309ebdaa01d1352c22.scope/container/memory.events
Oct  9 09:35:28 compute-0 podman[17059]: 2025-10-09 09:35:28.609071563 +0000 UTC m=+0.088293451 container died 37f91078cc28605b1a3bbafd9442d1f1c8bb39def96470309ebdaa01d1352c22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Oct  9 09:35:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fd936b251588d49804779c9d87931396135b3b64551291e8e517a469ef64035-merged.mount: Deactivated successfully.
Oct  9 09:35:28 compute-0 podman[17059]: 2025-10-09 09:35:28.629611847 +0000 UTC m=+0.108833736 container remove 37f91078cc28605b1a3bbafd9442d1f1c8bb39def96470309ebdaa01d1352c22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bell, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  9 09:35:28 compute-0 podman[17059]: 2025-10-09 09:35:28.53642207 +0000 UTC m=+0.015643979 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:28 compute-0 systemd[1]: libpod-conmon-37f91078cc28605b1a3bbafd9442d1f1c8bb39def96470309ebdaa01d1352c22.scope: Deactivated successfully.
Oct  9 09:35:28 compute-0 podman[17130]: 2025-10-09 09:35:28.755070033 +0000 UTC m=+0.036710649 container create 9aeeee65d3dd6203149e765b81ab47548d1a1fdede4bf5b49ac94991622b0f99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_jang, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:28 compute-0 systemd[1]: Started libpod-conmon-9aeeee65d3dd6203149e765b81ab47548d1a1fdede4bf5b49ac94991622b0f99.scope.
Oct  9 09:35:28 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b2b74db2b2fae54f63a42cb351749192edf41ff0e5903fe2c1a55b88c59e7f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b2b74db2b2fae54f63a42cb351749192edf41ff0e5903fe2c1a55b88c59e7f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b2b74db2b2fae54f63a42cb351749192edf41ff0e5903fe2c1a55b88c59e7f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b2b74db2b2fae54f63a42cb351749192edf41ff0e5903fe2c1a55b88c59e7f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:28 compute-0 podman[17130]: 2025-10-09 09:35:28.816730181 +0000 UTC m=+0.098370818 container init 9aeeee65d3dd6203149e765b81ab47548d1a1fdede4bf5b49ac94991622b0f99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_jang, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:28 compute-0 podman[17130]: 2025-10-09 09:35:28.822836135 +0000 UTC m=+0.104476752 container start 9aeeee65d3dd6203149e765b81ab47548d1a1fdede4bf5b49ac94991622b0f99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  9 09:35:28 compute-0 podman[17130]: 2025-10-09 09:35:28.823919132 +0000 UTC m=+0.105559748 container attach 9aeeee65d3dd6203149e765b81ab47548d1a1fdede4bf5b49ac94991622b0f99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:28 compute-0 python3[17132]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:28 compute-0 podman[17130]: 2025-10-09 09:35:28.737791725 +0000 UTC m=+0.019432361 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:28 compute-0 podman[17149]: 2025-10-09 09:35:28.865836106 +0000 UTC m=+0.027196550 container create e8d58dd0a1e098fe22018ee3f83e6948be1474ea0a4bddcc64df9cb6a3ea2735 (image=quay.io/ceph/ceph:v19, name=clever_lalande, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:35:28 compute-0 systemd[1]: Started libpod-conmon-e8d58dd0a1e098fe22018ee3f83e6948be1474ea0a4bddcc64df9cb6a3ea2735.scope.
Oct  9 09:35:28 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014b21ece4883a77d2aedf2ab69eb7d1c04ab694722003f793afd4c736b5c88b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014b21ece4883a77d2aedf2ab69eb7d1c04ab694722003f793afd4c736b5c88b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014b21ece4883a77d2aedf2ab69eb7d1c04ab694722003f793afd4c736b5c88b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:28 compute-0 podman[17149]: 2025-10-09 09:35:28.909528781 +0000 UTC m=+0.070889235 container init e8d58dd0a1e098fe22018ee3f83e6948be1474ea0a4bddcc64df9cb6a3ea2735 (image=quay.io/ceph/ceph:v19, name=clever_lalande, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  9 09:35:28 compute-0 podman[17149]: 2025-10-09 09:35:28.913996017 +0000 UTC m=+0.075356451 container start e8d58dd0a1e098fe22018ee3f83e6948be1474ea0a4bddcc64df9cb6a3ea2735 (image=quay.io/ceph/ceph:v19, name=clever_lalande, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 09:35:28 compute-0 podman[17149]: 2025-10-09 09:35:28.916177402 +0000 UTC m=+0.077537845 container attach e8d58dd0a1e098fe22018ee3f83e6948be1474ea0a4bddcc64df9cb6a3ea2735 (image=quay.io/ceph/ceph:v19, name=clever_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:28 compute-0 podman[17149]: 2025-10-09 09:35:28.856132406 +0000 UTC m=+0.017492839 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:29 compute-0 brave_jang[17144]: {
Oct  9 09:35:29 compute-0 brave_jang[17144]:    "1": [
Oct  9 09:35:29 compute-0 brave_jang[17144]:        {
Oct  9 09:35:29 compute-0 brave_jang[17144]:            "devices": [
Oct  9 09:35:29 compute-0 brave_jang[17144]:                "/dev/loop3"
Oct  9 09:35:29 compute-0 brave_jang[17144]:            ],
Oct  9 09:35:29 compute-0 brave_jang[17144]:            "lv_name": "ceph_lv0",
Oct  9 09:35:29 compute-0 brave_jang[17144]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:35:29 compute-0 brave_jang[17144]:            "lv_size": "21470642176",
Oct  9 09:35:29 compute-0 brave_jang[17144]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:35:29 compute-0 brave_jang[17144]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:35:29 compute-0 brave_jang[17144]:            "name": "ceph_lv0",
Oct  9 09:35:29 compute-0 brave_jang[17144]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:35:29 compute-0 brave_jang[17144]:            "tags": {
Oct  9 09:35:29 compute-0 brave_jang[17144]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:35:29 compute-0 brave_jang[17144]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:35:29 compute-0 brave_jang[17144]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:35:29 compute-0 brave_jang[17144]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:35:29 compute-0 brave_jang[17144]:                "ceph.cluster_name": "ceph",
Oct  9 09:35:29 compute-0 brave_jang[17144]:                "ceph.crush_device_class": "",
Oct  9 09:35:29 compute-0 brave_jang[17144]:                "ceph.encrypted": "0",
Oct  9 09:35:29 compute-0 brave_jang[17144]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:35:29 compute-0 brave_jang[17144]:                "ceph.osd_id": "1",
Oct  9 09:35:29 compute-0 brave_jang[17144]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:35:29 compute-0 brave_jang[17144]:                "ceph.type": "block",
Oct  9 09:35:29 compute-0 brave_jang[17144]:                "ceph.vdo": "0",
Oct  9 09:35:29 compute-0 brave_jang[17144]:                "ceph.with_tpm": "0"
Oct  9 09:35:29 compute-0 brave_jang[17144]:            },
Oct  9 09:35:29 compute-0 brave_jang[17144]:            "type": "block",
Oct  9 09:35:29 compute-0 brave_jang[17144]:            "vg_name": "ceph_vg0"
Oct  9 09:35:29 compute-0 brave_jang[17144]:        }
Oct  9 09:35:29 compute-0 brave_jang[17144]:    ]
Oct  9 09:35:29 compute-0 brave_jang[17144]: }
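The JSON block emitted by brave_jang matches the output shape of ceph-volume's JSON listing mode, keyed by OSD id and carrying the backing LV, its ceph.* tags, and the underlying device. A sketch of the sort of containerized invocation that produces it (the entrypoint and arguments are an assumption; only the output appears in the log):

    podman run --rm --net=host --ipc=host --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph-volume quay.io/ceph/ceph:v19 lvm list --format json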
Oct  9 09:35:29 compute-0 systemd[1]: libpod-9aeeee65d3dd6203149e765b81ab47548d1a1fdede4bf5b49ac94991622b0f99.scope: Deactivated successfully.
Oct  9 09:35:29 compute-0 conmon[17144]: conmon 9aeeee65d3dd6203149e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9aeeee65d3dd6203149e765b81ab47548d1a1fdede4bf5b49ac94991622b0f99.scope/container/memory.events
Oct  9 09:35:29 compute-0 podman[17189]: 2025-10-09 09:35:29.096760211 +0000 UTC m=+0.017655811 container died 9aeeee65d3dd6203149e765b81ab47548d1a1fdede4bf5b49ac94991622b0f99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_jang, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  9 09:35:29 compute-0 podman[17189]: 2025-10-09 09:35:29.115965217 +0000 UTC m=+0.036860816 container remove 9aeeee65d3dd6203149e765b81ab47548d1a1fdede4bf5b49ac94991622b0f99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:29 compute-0 systemd[1]: libpod-conmon-9aeeee65d3dd6203149e765b81ab47548d1a1fdede4bf5b49ac94991622b0f99.scope: Deactivated successfully.
Oct  9 09:35:29 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Oct  9 09:35:29 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/70415478' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct  9 09:35:29 compute-0 podman[17284]: 2025-10-09 09:35:29.497526655 +0000 UTC m=+0.026661979 container create ed1f1309f04cdc58f9e052b2ce227614cdc920520446c39abc5b5cbecbe9f878 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_solomon, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:29 compute-0 systemd[1]: Started libpod-conmon-ed1f1309f04cdc58f9e052b2ce227614cdc920520446c39abc5b5cbecbe9f878.scope.
Oct  9 09:35:29 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:29 compute-0 podman[17284]: 2025-10-09 09:35:29.542496588 +0000 UTC m=+0.071631922 container init ed1f1309f04cdc58f9e052b2ce227614cdc920520446c39abc5b5cbecbe9f878 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_solomon, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  9 09:35:29 compute-0 podman[17284]: 2025-10-09 09:35:29.546456506 +0000 UTC m=+0.075591829 container start ed1f1309f04cdc58f9e052b2ce227614cdc920520446c39abc5b5cbecbe9f878 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_solomon, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 09:35:29 compute-0 podman[17284]: 2025-10-09 09:35:29.547550784 +0000 UTC m=+0.076686108 container attach ed1f1309f04cdc58f9e052b2ce227614cdc920520446c39abc5b5cbecbe9f878 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_solomon, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:29 compute-0 stoic_solomon[17298]: 167 167
Oct  9 09:35:29 compute-0 systemd[1]: libpod-ed1f1309f04cdc58f9e052b2ce227614cdc920520446c39abc5b5cbecbe9f878.scope: Deactivated successfully.
Oct  9 09:35:29 compute-0 conmon[17298]: conmon ed1f1309f04cdc58f9e0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ed1f1309f04cdc58f9e052b2ce227614cdc920520446c39abc5b5cbecbe9f878.scope/container/memory.events
Oct  9 09:35:29 compute-0 podman[17284]: 2025-10-09 09:35:29.549802291 +0000 UTC m=+0.078937615 container died ed1f1309f04cdc58f9e052b2ce227614cdc920520446c39abc5b5cbecbe9f878 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_solomon, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:29 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/1996078233' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct  9 09:35:29 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/70415478' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct  9 09:35:29 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/70415478' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct  9 09:35:29 compute-0 ceph-mgr[4772]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  9 09:35:29 compute-0 ceph-mgr[4772]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  9 09:35:29 compute-0 ceph-mgr[4772]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  9 09:35:29 compute-0 ceph-mgr[4772]: mgr respawn  1: '-n'
Oct  9 09:35:29 compute-0 ceph-mgr[4772]: mgr respawn  2: 'mgr.compute-0.lwqgfy'
Oct  9 09:35:29 compute-0 ceph-mgr[4772]: mgr respawn  3: '-f'
Oct  9 09:35:29 compute-0 ceph-mgr[4772]: mgr respawn  4: '--setuser'
Oct  9 09:35:29 compute-0 ceph-mgr[4772]: mgr respawn  5: 'ceph'
Oct  9 09:35:29 compute-0 ceph-mgr[4772]: mgr respawn  6: '--setgroup'
Oct  9 09:35:29 compute-0 ceph-mgr[4772]: mgr respawn  7: 'ceph'
Oct  9 09:35:29 compute-0 ceph-mgr[4772]: mgr respawn  8: '--default-log-to-file=false'
Oct  9 09:35:29 compute-0 ceph-mgr[4772]: mgr respawn  9: '--default-log-to-journald=true'
Oct  9 09:35:29 compute-0 ceph-mgr[4772]: mgr respawn  10: '--default-log-to-stderr=false'
Oct  9 09:35:29 compute-0 ceph-mgr[4772]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct  9 09:35:29 compute-0 ceph-mgr[4772]: mgr respawn  exe_path /proc/self/exe
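The respawn block above shows ceph-mgr reacting to the module change (dashboard was just enabled) by re-executing its own binary image through the stable /proc/self/exe link while preserving its original argv. An illustrative shell analogue assembled from the argv lines logged above (not ceph source code):

    exec /proc/self/exe -n mgr.compute-0.lwqgfy -f \
        --setuser ceph --setgroup ceph \
        --default-log-to-file=false --default-log-to-journald=true \
        --default-log-to-stderr=false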
Oct  9 09:35:29 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.lwqgfy(active, since 93s)
Oct  9 09:35:29 compute-0 podman[17284]: 2025-10-09 09:35:29.57783245 +0000 UTC m=+0.106967775 container remove ed1f1309f04cdc58f9e052b2ce227614cdc920520446c39abc5b5cbecbe9f878 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_solomon, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:29 compute-0 podman[17284]: 2025-10-09 09:35:29.486339512 +0000 UTC m=+0.015474847 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b2b74db2b2fae54f63a42cb351749192edf41ff0e5903fe2c1a55b88c59e7f1-merged.mount: Deactivated successfully.
Oct  9 09:35:29 compute-0 systemd[1]: libpod-e8d58dd0a1e098fe22018ee3f83e6948be1474ea0a4bddcc64df9cb6a3ea2735.scope: Deactivated successfully.
Oct  9 09:35:29 compute-0 podman[17149]: 2025-10-09 09:35:29.586952639 +0000 UTC m=+0.748313071 container died e8d58dd0a1e098fe22018ee3f83e6948be1474ea0a4bddcc64df9cb6a3ea2735 (image=quay.io/ceph/ceph:v19, name=clever_lalande, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  9 09:35:29 compute-0 systemd[1]: libpod-conmon-ed1f1309f04cdc58f9e052b2ce227614cdc920520446c39abc5b5cbecbe9f878.scope: Deactivated successfully.
Oct  9 09:35:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-014b21ece4883a77d2aedf2ab69eb7d1c04ab694722003f793afd4c736b5c88b-merged.mount: Deactivated successfully.
Oct  9 09:35:29 compute-0 podman[17149]: 2025-10-09 09:35:29.626794332 +0000 UTC m=+0.788154765 container remove e8d58dd0a1e098fe22018ee3f83e6948be1474ea0a4bddcc64df9cb6a3ea2735 (image=quay.io/ceph/ceph:v19, name=clever_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:29 compute-0 systemd-logind[798]: Session 18 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Oct  9 09:35:29 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Oct  9 09:35:29 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Oct  9 09:35:29 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Oct  9 09:35:29 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Oct  9 09:35:29 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Oct  9 09:35:29 compute-0 systemd-logind[798]: Session 12 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-0 systemd-logind[798]: Session 13 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-0 systemd-logind[798]: Session 10 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Oct  9 09:35:29 compute-0 systemd-logind[798]: Session 17 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-0 systemd[1]: libpod-conmon-e8d58dd0a1e098fe22018ee3f83e6948be1474ea0a4bddcc64df9cb6a3ea2735.scope: Deactivated successfully.
Oct  9 09:35:29 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Oct  9 09:35:29 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Oct  9 09:35:29 compute-0 systemd-logind[798]: Session 14 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-0 systemd-logind[798]: Session 8 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-0 systemd-logind[798]: Session 6 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-0 systemd-logind[798]: Session 9 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ignoring --setuser ceph since I am not root
Oct  9 09:35:29 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ignoring --setgroup ceph since I am not root
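
The two "ignoring --setuser/--setgroup ceph" lines above are benign: cephadm already starts the mgr process inside its container as the unprivileged ceph user, so the daemon skips the privilege drop instead of attempting it. A minimal sketch of that guard, assuming only that the decision is keyed on the effective UID:

    import os

    # Sketch of the check behind the two log lines above: a setuser/setgroup
    # privilege drop is only attempted when running as root.
    for flag, value in (("--setuser", "ceph"), ("--setgroup", "ceph")):
        if os.geteuid() != 0:
            print(f"ignoring {flag} {value} since I am not root")
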
Oct  9 09:35:29 compute-0 systemd-logind[798]: Session 15 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-0 systemd-logind[798]: Removed session 12.
Oct  9 09:35:29 compute-0 systemd-logind[798]: Removed session 13.
Oct  9 09:35:29 compute-0 systemd-logind[798]: Removed session 17.
Oct  9 09:35:29 compute-0 systemd-logind[798]: Removed session 10.
Oct  9 09:35:29 compute-0 systemd-logind[798]: Removed session 14.
Oct  9 09:35:29 compute-0 systemd-logind[798]: Removed session 8.
Oct  9 09:35:29 compute-0 systemd-logind[798]: Removed session 6.
Oct  9 09:35:29 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Oct  9 09:35:29 compute-0 systemd-logind[798]: Removed session 9.
Oct  9 09:35:29 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Oct  9 09:35:29 compute-0 systemd-logind[798]: Session 16 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-0 systemd-logind[798]: Session 11 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-0 systemd-logind[798]: Removed session 15.
Oct  9 09:35:29 compute-0 systemd-logind[798]: Removed session 16.
Oct  9 09:35:29 compute-0 systemd-logind[798]: Removed session 11.
Oct  9 09:35:29 compute-0 ceph-mgr[4772]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  9 09:35:29 compute-0 ceph-mgr[4772]: pidfile_write: ignore empty --pid-file
Oct  9 09:35:29 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'alerts'
Oct  9 09:35:29 compute-0 podman[17351]: 2025-10-09 09:35:29.732324201 +0000 UTC m=+0.029395848 container create 83f49dbf640181be81094c1d3b9d04b9713bf595d50910f41cf738d8c8c89772 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_stonebraker, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:29 compute-0 systemd[1]: Started libpod-conmon-83f49dbf640181be81094c1d3b9d04b9713bf595d50910f41cf738d8c8c89772.scope.
Oct  9 09:35:29 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef135a1f23d9473099a7ae330c174cdb44d27fb9295b3ebdf82cd43cbdb207a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef135a1f23d9473099a7ae330c174cdb44d27fb9295b3ebdf82cd43cbdb207a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef135a1f23d9473099a7ae330c174cdb44d27fb9295b3ebdf82cd43cbdb207a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef135a1f23d9473099a7ae330c174cdb44d27fb9295b3ebdf82cd43cbdb207a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
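
The xfs notices above (repeated for another overlay mount at 09:35:30 below) mean these filesystems were created without the XFS bigtime feature, so inode timestamps are capped at the signed 32-bit epoch second count the kernel prints as 0x7fffffff. Converting that constant shows the actual ceiling:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch is the classic "year 2038" limit.
    limit = 0x7FFFFFFF
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
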
Oct  9 09:35:29 compute-0 podman[17351]: 2025-10-09 09:35:29.787332775 +0000 UTC m=+0.084404441 container init 83f49dbf640181be81094c1d3b9d04b9713bf595d50910f41cf738d8c8c89772 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Oct  9 09:35:29 compute-0 podman[17351]: 2025-10-09 09:35:29.795890879 +0000 UTC m=+0.092962525 container start 83f49dbf640181be81094c1d3b9d04b9713bf595d50910f41cf738d8c8c89772 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_stonebraker, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  9 09:35:29 compute-0 podman[17351]: 2025-10-09 09:35:29.801167931 +0000 UTC m=+0.098239577 container attach 83f49dbf640181be81094c1d3b9d04b9713bf595d50910f41cf738d8c8c89772 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_stonebraker, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:29 compute-0 ceph-mgr[4772]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:35:29 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'balancer'
Oct  9 09:35:29 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:29.806+0000 7fccf745c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:35:29 compute-0 podman[17351]: 2025-10-09 09:35:29.720687721 +0000 UTC m=+0.017759388 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:29 compute-0 ceph-mgr[4772]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:35:29 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'cephadm'
Oct  9 09:35:29 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:29.881+0000 7fccf745c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-0 python3[17395]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
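
The Ansible task above runs a one-shot ceph client container to store the Grafana API username for the dashboard. Reconstructed from the logged _raw_params (image, volumes, fsid and arguments are all taken verbatim from that line), the equivalent invocation is:

    import subprocess

    # One-shot "ceph dashboard set-grafana-api-username admin" via podman,
    # exactly as logged by ansible-ansible.legacy.command above.
    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host", "--interactive",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--volume", "/home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", "286f8bf0-da72-5823-9a4e-ac4457d9e609",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "dashboard", "set-grafana-api-username", "admin",
    ]
    subprocess.run(cmd, check=True)
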
Oct  9 09:35:30 compute-0 podman[17421]: 2025-10-09 09:35:30.095700331 +0000 UTC m=+0.052451833 container create 1d8a7c45bb12ed0f45e1efb781fb43f6e070a4f01da0a7d4d230f2882cb73ffd (image=quay.io/ceph/ceph:v19, name=brave_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True)
Oct  9 09:35:30 compute-0 systemd[1]: Started libpod-conmon-1d8a7c45bb12ed0f45e1efb781fb43f6e070a4f01da0a7d4d230f2882cb73ffd.scope.
Oct  9 09:35:30 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871d3211f43ec35763e819fb8c4c9a71bbaadefb14d35bb8c061dba15ca8365f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871d3211f43ec35763e819fb8c4c9a71bbaadefb14d35bb8c061dba15ca8365f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871d3211f43ec35763e819fb8c4c9a71bbaadefb14d35bb8c061dba15ca8365f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:30 compute-0 podman[17421]: 2025-10-09 09:35:30.073768791 +0000 UTC m=+0.030520314 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:30 compute-0 podman[17421]: 2025-10-09 09:35:30.162250262 +0000 UTC m=+0.119001783 container init 1d8a7c45bb12ed0f45e1efb781fb43f6e070a4f01da0a7d4d230f2882cb73ffd (image=quay.io/ceph/ceph:v19, name=brave_jemison, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:30 compute-0 podman[17421]: 2025-10-09 09:35:30.168200118 +0000 UTC m=+0.124951620 container start 1d8a7c45bb12ed0f45e1efb781fb43f6e070a4f01da0a7d4d230f2882cb73ffd (image=quay.io/ceph/ceph:v19, name=brave_jemison, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:30 compute-0 podman[17421]: 2025-10-09 09:35:30.169465704 +0000 UTC m=+0.126217216 container attach 1d8a7c45bb12ed0f45e1efb781fb43f6e070a4f01da0a7d4d230f2882cb73ffd (image=quay.io/ceph/ceph:v19, name=brave_jemison, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 09:35:30 compute-0 lvm[17513]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:35:30 compute-0 lvm[17513]: VG ceph_vg0 finished
Oct  9 09:35:30 compute-0 quirky_stonebraker[17365]: {}
Oct  9 09:35:30 compute-0 systemd[1]: libpod-83f49dbf640181be81094c1d3b9d04b9713bf595d50910f41cf738d8c8c89772.scope: Deactivated successfully.
Oct  9 09:35:30 compute-0 podman[17351]: 2025-10-09 09:35:30.328245149 +0000 UTC m=+0.625316794 container died 83f49dbf640181be81094c1d3b9d04b9713bf595d50910f41cf738d8c8c89772 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_stonebraker, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef135a1f23d9473099a7ae330c174cdb44d27fb9295b3ebdf82cd43cbdb207a0-merged.mount: Deactivated successfully.
Oct  9 09:35:30 compute-0 podman[17351]: 2025-10-09 09:35:30.350526385 +0000 UTC m=+0.647598032 container remove 83f49dbf640181be81094c1d3b9d04b9713bf595d50910f41cf738d8c8c89772 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_stonebraker, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  9 09:35:30 compute-0 systemd[1]: libpod-conmon-83f49dbf640181be81094c1d3b9d04b9713bf595d50910f41cf738d8c8c89772.scope: Deactivated successfully.
Oct  9 09:35:30 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Oct  9 09:35:30 compute-0 systemd[1]: session-18.scope: Consumed 13.065s CPU time.
Oct  9 09:35:30 compute-0 systemd-logind[798]: Removed session 18.
Oct  9 09:35:30 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'crash'
Oct  9 09:35:30 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/70415478' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct  9 09:35:30 compute-0 ceph-mgr[4772]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'dashboard'
Oct  9 09:35:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:30.608+0000 7fccf745c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.takdnm started
Oct  9 09:35:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:31 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'devicehealth'
Oct  9 09:35:31 compute-0 ceph-mgr[4772]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:35:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:31.157+0000 7fccf745c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:35:31 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'diskprediction_local'
Oct  9 09:35:31 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.etokpp started
Oct  9 09:35:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  9 09:35:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  9 09:35:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  from numpy import show_config as show_numpy_config
Oct  9 09:35:31 compute-0 ceph-mgr[4772]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:35:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:31.301+0000 7fccf745c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:35:31 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'influx'
Oct  9 09:35:31 compute-0 ceph-mgr[4772]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:35:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:31.363+0000 7fccf745c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:35:31 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'insights'
Oct  9 09:35:31 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'iostat'
Oct  9 09:35:31 compute-0 ceph-mgr[4772]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:35:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:31.482+0000 7fccf745c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:35:31 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'k8sevents'
Oct  9 09:35:31 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.lwqgfy(active, since 95s), standbys: compute-2.takdnm, compute-1.etokpp
Oct  9 09:35:31 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'localpool'
Oct  9 09:35:31 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'mds_autoscaler'
Oct  9 09:35:32 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'mirroring'
Oct  9 09:35:32 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'nfs'
Oct  9 09:35:32 compute-0 ceph-mgr[4772]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:32.347+0000 7fccf745c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'orchestrator'
Oct  9 09:35:32 compute-0 ceph-mgr[4772]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:32.535+0000 7fccf745c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'osd_perf_query'
Oct  9 09:35:32 compute-0 ceph-mgr[4772]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:32.602+0000 7fccf745c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'osd_support'
Oct  9 09:35:32 compute-0 ceph-mgr[4772]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:32.661+0000 7fccf745c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'pg_autoscaler'
Oct  9 09:35:32 compute-0 ceph-mgr[4772]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:32.730+0000 7fccf745c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'progress'
Oct  9 09:35:32 compute-0 ceph-mgr[4772]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:32.794+0000 7fccf745c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'prometheus'
Oct  9 09:35:33 compute-0 ceph-mgr[4772]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:35:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:33.093+0000 7fccf745c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:35:33 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'rbd_support'
Oct  9 09:35:33 compute-0 ceph-mgr[4772]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:35:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:33.178+0000 7fccf745c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:35:33 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'restful'
Oct  9 09:35:33 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'rgw'
Oct  9 09:35:33 compute-0 ceph-mgr[4772]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:35:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:33.555+0000 7fccf745c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:35:33 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'rook'
Oct  9 09:35:34 compute-0 ceph-mgr[4772]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:34.044+0000 7fccf745c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'selftest'
Oct  9 09:35:34 compute-0 ceph-mgr[4772]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:34.107+0000 7fccf745c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'snap_schedule'
Oct  9 09:35:34 compute-0 ceph-mgr[4772]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:34.178+0000 7fccf745c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'stats'
Oct  9 09:35:34 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'status'
Oct  9 09:35:34 compute-0 ceph-mgr[4772]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:34.310+0000 7fccf745c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'telegraf'
Oct  9 09:35:34 compute-0 ceph-mgr[4772]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:34.371+0000 7fccf745c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'telemetry'
Oct  9 09:35:34 compute-0 ceph-mgr[4772]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:34.504+0000 7fccf745c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'test_orchestrator'
Oct  9 09:35:34 compute-0 ceph-mgr[4772]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:34.740+0000 7fccf745c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'volumes'
Oct  9 09:35:34 compute-0 ceph-mgr[4772]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:34.968+0000 7fccf745c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'zabbix'
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:35.029+0000 7fccf745c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
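
Every "Module X has missing NOTIFY_TYPES member" entry in the run above is the same non-fatal warning: recent ceph-mgr releases expect each Python module to declare which cluster-map updates it consumes via a NOTIFY_TYPES class attribute, and these bundled modules simply have not declared one. A hedged sketch of a module that satisfies the check (mgr_module is importable only inside ceph-mgr; the class name and notify types here are illustrative):

    from typing import List

    from mgr_module import MgrModule, NotifyType  # available only inside ceph-mgr

    class Example(MgrModule):
        # Declaring NOTIFY_TYPES is what silences the warning seen above.
        NOTIFY_TYPES: List[NotifyType] = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type: NotifyType, notify_id: str) -> None:
            self.log.info("notification: %s", notify_type)
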
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: ms_deliver_dispatch: unhandled message 0x55d0bf93ed00 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Oct  9 09:35:35 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Active manager daemon compute-0.lwqgfy restarted
Oct  9 09:35:35 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Oct  9 09:35:35 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.lwqgfy
Oct  9 09:35:35 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Oct  9 09:35:35 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Oct  9 09:35:35 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.lwqgfy(active, starting, since 0.015525s), standbys: compute-2.takdnm, compute-1.etokpp
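
The osdmap line above is worth decoding: "up" counts OSD daemons currently running, while "in" counts OSDs still participating in data placement, so "3 total, 2 up, 3 in" means one OSD is down but has not yet been marked out (its data not yet rebalanced away). A small helper for pulling those fields out of such a line:

    import re

    line = "osdmap e25: 3 total, 2 up, 3 in"
    m = re.search(r"osdmap e(\d+): (\d+) total, (\d+) up, (\d+) in", line)
    if m:
        epoch, total, up, inn = map(int, m.groups())
        print(f"epoch={epoch} total={total} up={up} in={inn} down={total - up}")
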
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: mgr handle_mgr_map Activating!
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: mgr handle_mgr_map I am now activating
Oct  9 09:35:35 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  9 09:35:35 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  9 09:35:35 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 09:35:35 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 09:35:35 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  9 09:35:35 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  9 09:35:35 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.lwqgfy", "id": "compute-0.lwqgfy"} v 0)
Oct  9 09:35:35 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-0.lwqgfy", "id": "compute-0.lwqgfy"}]: dispatch
Oct  9 09:35:35 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.takdnm", "id": "compute-2.takdnm"} v 0)
Oct  9 09:35:35 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-2.takdnm", "id": "compute-2.takdnm"}]: dispatch
Oct  9 09:35:35 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.etokpp", "id": "compute-1.etokpp"} v 0)
Oct  9 09:35:35 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-1.etokpp", "id": "compute-1.etokpp"}]: dispatch
Oct  9 09:35:35 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 09:35:35 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 09:35:35 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  9 09:35:35 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  9 09:35:35 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 09:35:35 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 09:35:35 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 09:35:35 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  9 09:35:35 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e1 all = 1
Oct  9 09:35:35 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct  9 09:35:35 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  9 09:35:35 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct  9 09:35:35 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: mgr load_all_metadata Skipping incomplete metadata entry
Oct  9 09:35:35 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Manager daemon compute-0.lwqgfy is now available
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: balancer
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [balancer INFO root] Starting
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:35:35
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: cephadm
Oct  9 09:35:35 compute-0 ceph-mon[4497]: Active manager daemon compute-0.lwqgfy restarted
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: crash
Oct  9 09:35:35 compute-0 ceph-mon[4497]: Activating manager daemon compute-0.lwqgfy
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: dashboard
Oct  9 09:35:35 compute-0 ceph-mon[4497]: Manager daemon compute-0.lwqgfy is now available
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO access_control] Loading user roles DB version=2
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO sso] Loading SSO DB version=1
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: devicehealth
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: iostat
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: nfs
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [devicehealth INFO root] Starting
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: orchestrator
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: pg_autoscaler
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: progress
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [progress INFO root] Loading...
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fcc9affcb80>, <progress.module.GhostEvent object at 0x7fcc9affcdf0>, <progress.module.GhostEvent object at 0x7fcc9affce20>, <progress.module.GhostEvent object at 0x7fcc9affce50>] historic events
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [progress INFO root] Loaded OSDMap, ready.
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [rbd_support INFO root] recovery thread starting
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [rbd_support INFO root] starting setup
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: rbd_support
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: restful
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [restful INFO root] server_addr: :: server_port: 8003
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [restful WARNING root] server not running: no certificate configured
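
The restful module's warning is self-describing: it will not listen on its configured port 8003 until a TLS certificate is provided. One documented way to satisfy it is the module's own self-signed-certificate helper; a hedged sketch of invoking it from any host holding an admin keyring:

    import subprocess

    # Generates and stores a self-signed certificate for the restful module;
    # the mgr then brings its REST server up with TLS.
    subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)
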
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: status
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: telemetry
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:35 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"} v 0)
Oct  9 09:35:35 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"}]: dispatch
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [rbd_support INFO root] PerfHandler: starting
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_task_task: vms, start_after=
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_task_task: volumes, start_after=
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_task_task: backups, start_after=
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_task_task: images, start_after=
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TaskHandler: starting
Oct  9 09:35:35 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"} v 0)
Oct  9 09:35:35 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"}]: dispatch
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [rbd_support INFO root] setup complete
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: volumes
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Oct  9 09:35:35 compute-0 systemd-logind[798]: New session 19 of user ceph-admin.
Oct  9 09:35:35 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Oct  9 09:35:35 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.module] Engine started.
Oct  9 09:35:35 compute-0 podman[17765]: 2025-10-09 09:35:35.973528293 +0000 UTC m=+0.036650314 container exec fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:36 compute-0 podman[17765]: 2025-10-09 09:35:36.050392481 +0000 UTC m=+0.113514501 container exec_died fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 09:35:36 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.lwqgfy(active, since 1.02391s), standbys: compute-2.takdnm, compute-1.etokpp
Oct  9 09:35:36 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14292 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 09:35:36 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.takdnm restarted
Oct  9 09:35:36 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.takdnm started
Oct  9 09:35:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Oct  9 09:35:36 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v3: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:35:36 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:36 compute-0 brave_jemison[17455]: Option GRAFANA_API_USERNAME updated
Oct  9 09:35:36 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"}]: dispatch
Oct  9 09:35:36 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"}]: dispatch
Oct  9 09:35:36 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:36 compute-0 systemd[1]: libpod-1d8a7c45bb12ed0f45e1efb781fb43f6e070a4f01da0a7d4d230f2882cb73ffd.scope: Deactivated successfully.
Oct  9 09:35:36 compute-0 podman[17421]: 2025-10-09 09:35:36.101107619 +0000 UTC m=+6.057859120 container died 1d8a7c45bb12ed0f45e1efb781fb43f6e070a4f01da0a7d4d230f2882cb73ffd (image=quay.io/ceph/ceph:v19, name=brave_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  9 09:35:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-871d3211f43ec35763e819fb8c4c9a71bbaadefb14d35bb8c061dba15ca8365f-merged.mount: Deactivated successfully.
Oct  9 09:35:36 compute-0 podman[17421]: 2025-10-09 09:35:36.133111572 +0000 UTC m=+6.089863074 container remove 1d8a7c45bb12ed0f45e1efb781fb43f6e070a4f01da0a7d4d230f2882cb73ffd (image=quay.io/ceph/ceph:v19, name=brave_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  9 09:35:36 compute-0 systemd[1]: libpod-conmon-1d8a7c45bb12ed0f45e1efb781fb43f6e070a4f01da0a7d4d230f2882cb73ffd.scope: Deactivated successfully.
Oct  9 09:35:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:35:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:35:36 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:35:36 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:35:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:35:36 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:36 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:35:36 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:36 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:36 compute-0 python3[17871]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Oct  9 09:35:36 compute-0 podman[17920]: 2025-10-09 09:35:36.426625959 +0000 UTC m=+0.030791531 container create 53db1627ca6ab1d72e63826c52288f63646c19632a424ce972bee29113193140 (image=quay.io/ceph/ceph:v19, name=lucid_panini, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:36 compute-0 systemd[1]: Started libpod-conmon-53db1627ca6ab1d72e63826c52288f63646c19632a424ce972bee29113193140.scope.
Oct  9 09:35:36 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e12d31f4cbf0cd136ec20f53d6b366b941cf6a5e2b61b05ddd5d092c3ad3736/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e12d31f4cbf0cd136ec20f53d6b366b941cf6a5e2b61b05ddd5d092c3ad3736/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e12d31f4cbf0cd136ec20f53d6b366b941cf6a5e2b61b05ddd5d092c3ad3736/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:36 compute-0 podman[17920]: 2025-10-09 09:35:36.487746968 +0000 UTC m=+0.091912561 container init 53db1627ca6ab1d72e63826c52288f63646c19632a424ce972bee29113193140 (image=quay.io/ceph/ceph:v19, name=lucid_panini, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:36 compute-0 podman[17920]: 2025-10-09 09:35:36.492916425 +0000 UTC m=+0.097081997 container start 53db1627ca6ab1d72e63826c52288f63646c19632a424ce972bee29113193140 (image=quay.io/ceph/ceph:v19, name=lucid_panini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  9 09:35:36 compute-0 podman[17920]: 2025-10-09 09:35:36.49388881 +0000 UTC m=+0.098054383 container attach 53db1627ca6ab1d72e63826c52288f63646c19632a424ce972bee29113193140 (image=quay.io/ceph/ceph:v19, name=lucid_panini, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:36 compute-0 podman[17920]: 2025-10-09 09:35:36.410308034 +0000 UTC m=+0.014473626 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:36 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.etokpp restarted
Oct  9 09:35:36 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.etokpp started
Oct  9 09:35:36 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14316 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 09:35:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Oct  9 09:35:36 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:36 compute-0 lucid_panini[17935]: Option GRAFANA_API_PASSWORD updated
Oct  9 09:35:36 compute-0 systemd[1]: libpod-53db1627ca6ab1d72e63826c52288f63646c19632a424ce972bee29113193140.scope: Deactivated successfully.
Oct  9 09:35:36 compute-0 podman[17920]: 2025-10-09 09:35:36.80673203 +0000 UTC m=+0.410897602 container died 53db1627ca6ab1d72e63826c52288f63646c19632a424ce972bee29113193140 (image=quay.io/ceph/ceph:v19, name=lucid_panini, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  9 09:35:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e12d31f4cbf0cd136ec20f53d6b366b941cf6a5e2b61b05ddd5d092c3ad3736-merged.mount: Deactivated successfully.
Oct  9 09:35:36 compute-0 podman[17920]: 2025-10-09 09:35:36.830097387 +0000 UTC m=+0.434262959 container remove 53db1627ca6ab1d72e63826c52288f63646c19632a424ce972bee29113193140 (image=quay.io/ceph/ceph:v19, name=lucid_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:36 compute-0 systemd[1]: libpod-conmon-53db1627ca6ab1d72e63826c52288f63646c19632a424ce972bee29113193140.scope: Deactivated successfully.
Oct  9 09:35:36 compute-0 ceph-mgr[4772]: [cephadm INFO cherrypy.error] [09/Oct/2025:09:35:36] ENGINE Bus STARTING
Oct  9 09:35:36 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : [09/Oct/2025:09:35:36] ENGINE Bus STARTING
Oct  9 09:35:36 compute-0 ceph-mgr[4772]: [cephadm INFO cherrypy.error] [09/Oct/2025:09:35:36] ENGINE Serving on http://192.168.122.100:8765
Oct  9 09:35:36 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : [09/Oct/2025:09:35:36] ENGINE Serving on http://192.168.122.100:8765
Oct  9 09:35:37 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.lwqgfy(active, since 2s), standbys: compute-1.etokpp, compute-2.takdnm
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v4: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:35:37 compute-0 python3[18084]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: [cephadm INFO cherrypy.error] [09/Oct/2025:09:35:37] ENGINE Serving on https://192.168.122.100:7150
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: [cephadm INFO cherrypy.error] [09/Oct/2025:09:35:37] ENGINE Client ('192.168.122.100', 44370) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : [09/Oct/2025:09:35:37] ENGINE Serving on https://192.168.122.100:7150
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: [cephadm INFO cherrypy.error] [09/Oct/2025:09:35:37] ENGINE Bus STARTED
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : [09/Oct/2025:09:35:37] ENGINE Client ('192.168.122.100', 44370) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : [09/Oct/2025:09:35:37] ENGINE Bus STARTED
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: [devicehealth INFO root] Check health
Oct  9 09:35:37 compute-0 podman[18107]: 2025-10-09 09:35:37.150826497 +0000 UTC m=+0.033914822 container create 491ba9e9ee069b6db22b9efe2281bab7fdfd29996010b1945e768bd53ae5718a (image=quay.io/ceph/ceph:v19, name=bold_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:35:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:35:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:35:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:35:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Oct  9 09:35:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Oct  9 09:35:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.5M
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.5M
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 128.5M
Oct  9 09:35:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 128.5M
Oct  9 09:35:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct  9 09:35:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:35:37 compute-0 systemd[1]: Started libpod-conmon-491ba9e9ee069b6db22b9efe2281bab7fdfd29996010b1945e768bd53ae5718a.scope.
Oct  9 09:35:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:35:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct  9 09:35:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:35:37 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:35:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:35:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct  9 09:35:37 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d4ce81c32070d29fb87093f40795462ebb57da97d194d47d0740357480a2c4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d4ce81c32070d29fb87093f40795462ebb57da97d194d47d0740357480a2c4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d4ce81c32070d29fb87093f40795462ebb57da97d194d47d0740357480a2c4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:37 compute-0 podman[18107]: 2025-10-09 09:35:37.209375987 +0000 UTC m=+0.092464323 container init 491ba9e9ee069b6db22b9efe2281bab7fdfd29996010b1945e768bd53ae5718a (image=quay.io/ceph/ceph:v19, name=bold_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:37 compute-0 podman[18107]: 2025-10-09 09:35:37.215037694 +0000 UTC m=+0.098126009 container start 491ba9e9ee069b6db22b9efe2281bab7fdfd29996010b1945e768bd53ae5718a (image=quay.io/ceph/ceph:v19, name=bold_spence, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:37 compute-0 podman[18107]: 2025-10-09 09:35:37.216314701 +0000 UTC m=+0.099403016 container attach 491ba9e9ee069b6db22b9efe2281bab7fdfd29996010b1945e768bd53ae5718a (image=quay.io/ceph/ceph:v19, name=bold_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:37 compute-0 podman[18107]: 2025-10-09 09:35:37.137249102 +0000 UTC m=+0.020337427 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:37 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:37 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:37 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:37 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14328 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 09:35:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Oct  9 09:35:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-0 bold_spence[18136]: Option ALERTMANAGER_API_HOST updated
Oct  9 09:35:37 compute-0 systemd[1]: libpod-491ba9e9ee069b6db22b9efe2281bab7fdfd29996010b1945e768bd53ae5718a.scope: Deactivated successfully.
Oct  9 09:35:37 compute-0 podman[18107]: 2025-10-09 09:35:37.522087625 +0000 UTC m=+0.405175941 container died 491ba9e9ee069b6db22b9efe2281bab7fdfd29996010b1945e768bd53ae5718a (image=quay.io/ceph/ceph:v19, name=bold_spence, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  9 09:35:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-98d4ce81c32070d29fb87093f40795462ebb57da97d194d47d0740357480a2c4-merged.mount: Deactivated successfully.
Oct  9 09:35:37 compute-0 podman[18107]: 2025-10-09 09:35:37.551026519 +0000 UTC m=+0.434114835 container remove 491ba9e9ee069b6db22b9efe2281bab7fdfd29996010b1945e768bd53ae5718a (image=quay.io/ceph/ceph:v19, name=bold_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:37 compute-0 systemd[1]: libpod-conmon-491ba9e9ee069b6db22b9efe2281bab7fdfd29996010b1945e768bd53ae5718a.scope: Deactivated successfully.
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:37 compute-0 python3[18468]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:37 compute-0 podman[18539]: 2025-10-09 09:35:37.856410532 +0000 UTC m=+0.033144874 container create 666cb683e35f4bc34d3821de465c92219e67508d75b2c8a2dee2eb3eece25179 (image=quay.io/ceph/ceph:v19, name=peaceful_wiles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:35:37 compute-0 systemd[1]: Started libpod-conmon-666cb683e35f4bc34d3821de465c92219e67508d75b2c8a2dee2eb3eece25179.scope.
Oct  9 09:35:37 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6de50afb24ce11f3c234620d8e903088067e0907318aeb977fcd80ccb506d8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6de50afb24ce11f3c234620d8e903088067e0907318aeb977fcd80ccb506d8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6de50afb24ce11f3c234620d8e903088067e0907318aeb977fcd80ccb506d8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:37 compute-0 podman[18539]: 2025-10-09 09:35:37.902417694 +0000 UTC m=+0.079152046 container init 666cb683e35f4bc34d3821de465c92219e67508d75b2c8a2dee2eb3eece25179 (image=quay.io/ceph/ceph:v19, name=peaceful_wiles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  9 09:35:37 compute-0 podman[18539]: 2025-10-09 09:35:37.908190973 +0000 UTC m=+0.084925315 container start 666cb683e35f4bc34d3821de465c92219e67508d75b2c8a2dee2eb3eece25179 (image=quay.io/ceph/ceph:v19, name=peaceful_wiles, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:37 compute-0 podman[18539]: 2025-10-09 09:35:37.909521391 +0000 UTC m=+0.086255734 container attach 666cb683e35f4bc34d3821de465c92219e67508d75b2c8a2dee2eb3eece25179 (image=quay.io/ceph/ceph:v19, name=peaceful_wiles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct  9 09:35:37 compute-0 podman[18539]: 2025-10-09 09:35:37.844942854 +0000 UTC m=+0.021677216 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:37 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:38 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:38 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:38 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:38 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:38 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14334 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 09:35:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Oct  9 09:35:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:38 compute-0 peaceful_wiles[18587]: Option PROMETHEUS_API_HOST updated
Oct  9 09:35:38 compute-0 systemd[1]: libpod-666cb683e35f4bc34d3821de465c92219e67508d75b2c8a2dee2eb3eece25179.scope: Deactivated successfully.
Oct  9 09:35:38 compute-0 podman[18539]: 2025-10-09 09:35:38.206222371 +0000 UTC m=+0.382956723 container died 666cb683e35f4bc34d3821de465c92219e67508d75b2c8a2dee2eb3eece25179 (image=quay.io/ceph/ceph:v19, name=peaceful_wiles, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 09:35:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed6de50afb24ce11f3c234620d8e903088067e0907318aeb977fcd80ccb506d8-merged.mount: Deactivated successfully.
Oct  9 09:35:38 compute-0 podman[18539]: 2025-10-09 09:35:38.227619271 +0000 UTC m=+0.404353612 container remove 666cb683e35f4bc34d3821de465c92219e67508d75b2c8a2dee2eb3eece25179 (image=quay.io/ceph/ceph:v19, name=peaceful_wiles, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:35:38 compute-0 systemd[1]: libpod-conmon-666cb683e35f4bc34d3821de465c92219e67508d75b2c8a2dee2eb3eece25179.scope: Deactivated successfully.
Oct  9 09:35:38 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:38 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:38 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:38 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:38 compute-0 python3[18913]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:38 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:38 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:38 compute-0 ceph-mon[4497]: [09/Oct/2025:09:35:36] ENGINE Bus STARTING
Oct  9 09:35:38 compute-0 ceph-mon[4497]: [09/Oct/2025:09:35:36] ENGINE Serving on http://192.168.122.100:8765
Oct  9 09:35:38 compute-0 ceph-mon[4497]: [09/Oct/2025:09:35:37] ENGINE Serving on https://192.168.122.100:7150
Oct  9 09:35:38 compute-0 ceph-mon[4497]: [09/Oct/2025:09:35:37] ENGINE Client ('192.168.122.100', 44370) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 09:35:38 compute-0 ceph-mon[4497]: [09/Oct/2025:09:35:37] ENGINE Bus STARTED
Oct  9 09:35:38 compute-0 ceph-mon[4497]: Adjusting osd_memory_target on compute-0 to 128.5M
Oct  9 09:35:38 compute-0 ceph-mon[4497]: Adjusting osd_memory_target on compute-1 to 128.5M
Oct  9 09:35:38 compute-0 ceph-mon[4497]: Unable to set osd_memory_target on compute-0 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct  9 09:35:38 compute-0 ceph-mon[4497]: Unable to set osd_memory_target on compute-1 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct  9 09:35:38 compute-0 ceph-mon[4497]: Updating compute-0:/etc/ceph/ceph.conf
Oct  9 09:35:38 compute-0 ceph-mon[4497]: Updating compute-1:/etc/ceph/ceph.conf
Oct  9 09:35:38 compute-0 ceph-mon[4497]: Updating compute-2:/etc/ceph/ceph.conf
Oct  9 09:35:38 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:38 compute-0 ceph-mon[4497]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:38 compute-0 ceph-mon[4497]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:38 compute-0 ceph-mon[4497]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:38 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:38 compute-0 podman[18988]: 2025-10-09 09:35:38.513502673 +0000 UTC m=+0.034662740 container create b4863d77098acd2218190c27d80409ae50bc4f0a1b4a4f4a9c17c352b326d2ea (image=quay.io/ceph/ceph:v19, name=focused_ptolemy, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct  9 09:35:38 compute-0 systemd[1]: Started libpod-conmon-b4863d77098acd2218190c27d80409ae50bc4f0a1b4a4f4a9c17c352b326d2ea.scope.
Oct  9 09:35:38 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc33195d217aac5db30ffc57cf04984a359c0b93c3eba310a0ccaea0d62d9660/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc33195d217aac5db30ffc57cf04984a359c0b93c3eba310a0ccaea0d62d9660/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc33195d217aac5db30ffc57cf04984a359c0b93c3eba310a0ccaea0d62d9660/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:38 compute-0 podman[18988]: 2025-10-09 09:35:38.57049454 +0000 UTC m=+0.091654617 container init b4863d77098acd2218190c27d80409ae50bc4f0a1b4a4f4a9c17c352b326d2ea (image=quay.io/ceph/ceph:v19, name=focused_ptolemy, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct  9 09:35:38 compute-0 podman[18988]: 2025-10-09 09:35:38.575058822 +0000 UTC m=+0.096218889 container start b4863d77098acd2218190c27d80409ae50bc4f0a1b4a4f4a9c17c352b326d2ea (image=quay.io/ceph/ceph:v19, name=focused_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:35:38 compute-0 podman[18988]: 2025-10-09 09:35:38.576375375 +0000 UTC m=+0.097535442 container attach b4863d77098acd2218190c27d80409ae50bc4f0a1b4a4f4a9c17c352b326d2ea (image=quay.io/ceph/ceph:v19, name=focused_ptolemy, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct  9 09:35:38 compute-0 podman[18988]: 2025-10-09 09:35:38.502482438 +0000 UTC m=+0.023642526 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:35:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:35:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:35:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:35:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:38 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14340 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 09:35:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Oct  9 09:35:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:38 compute-0 focused_ptolemy[19026]: Option GRAFANA_API_URL updated
Oct  9 09:35:38 compute-0 systemd[1]: libpod-b4863d77098acd2218190c27d80409ae50bc4f0a1b4a4f4a9c17c352b326d2ea.scope: Deactivated successfully.
Oct  9 09:35:38 compute-0 podman[18988]: 2025-10-09 09:35:38.875437251 +0000 UTC m=+0.396597328 container died b4863d77098acd2218190c27d80409ae50bc4f0a1b4a4f4a9c17c352b326d2ea (image=quay.io/ceph/ceph:v19, name=focused_ptolemy, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 09:35:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc33195d217aac5db30ffc57cf04984a359c0b93c3eba310a0ccaea0d62d9660-merged.mount: Deactivated successfully.
Oct  9 09:35:38 compute-0 podman[18988]: 2025-10-09 09:35:38.895028042 +0000 UTC m=+0.416188109 container remove b4863d77098acd2218190c27d80409ae50bc4f0a1b4a4f4a9c17c352b326d2ea (image=quay.io/ceph/ceph:v19, name=focused_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  9 09:35:38 compute-0 systemd[1]: libpod-conmon-b4863d77098acd2218190c27d80409ae50bc4f0a1b4a4f4a9c17c352b326d2ea.scope: Deactivated successfully.
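
The focused_ptolemy container above is a one-shot ceph CLI run: cephadm points the dashboard at the Grafana endpoint, the command prints "Option GRAFANA_API_URL updated", and podman reaps the container. A minimal sketch of how the setting could be verified afterwards, reusing the same image and keyring mounts seen in this log (illustrative, not taken from the log itself):

    podman run --rm --net=host \
        --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        dashboard get-grafana-api-url
    # expected to print: http://192.168.122.100:3100
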
Oct  9 09:35:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:35:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:35:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:35:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:38 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev 0958e344-f326-4f49-a18c-5e9c6bddfd6c (Updating node-exporter deployment (+3 -> 3))
Oct  9 09:35:38 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Oct  9 09:35:38 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Oct  9 09:35:39 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v5: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:35:39 compute-0 python3[19259]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
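
Re-wrapped for readability, the Ansible task above shells out to the following podman invocation; the same pattern re-enables the module at 09:35:40 below:

    podman run --rm --net=host --ipc=host --interactive \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        mgr module disable dashboard
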
Oct  9 09:35:39 compute-0 podman[19285]: 2025-10-09 09:35:39.165758593 +0000 UTC m=+0.026557350 container create c8af26dc446fdd5e8521cafd70f5faef7a1635c9b2a1c00df355e592f69f3467 (image=quay.io/ceph/ceph:v19, name=brave_hertz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:39 compute-0 systemd[1]: Started libpod-conmon-c8af26dc446fdd5e8521cafd70f5faef7a1635c9b2a1c00df355e592f69f3467.scope.
Oct  9 09:35:39 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a172290db9de000389d4c787784ab10063fdd3bf75e8615c50fe2bf6ac4f33af/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a172290db9de000389d4c787784ab10063fdd3bf75e8615c50fe2bf6ac4f33af/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a172290db9de000389d4c787784ab10063fdd3bf75e8615c50fe2bf6ac4f33af/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:39 compute-0 podman[19285]: 2025-10-09 09:35:39.222673153 +0000 UTC m=+0.083471930 container init c8af26dc446fdd5e8521cafd70f5faef7a1635c9b2a1c00df355e592f69f3467 (image=quay.io/ceph/ceph:v19, name=brave_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  9 09:35:39 compute-0 podman[19285]: 2025-10-09 09:35:39.227555542 +0000 UTC m=+0.088354299 container start c8af26dc446fdd5e8521cafd70f5faef7a1635c9b2a1c00df355e592f69f3467 (image=quay.io/ceph/ceph:v19, name=brave_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  9 09:35:39 compute-0 podman[19285]: 2025-10-09 09:35:39.230580606 +0000 UTC m=+0.091379383 container attach c8af26dc446fdd5e8521cafd70f5faef7a1635c9b2a1c00df355e592f69f3467 (image=quay.io/ceph/ceph:v19, name=brave_hertz, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:39 compute-0 podman[19285]: 2025-10-09 09:35:39.155339847 +0000 UTC m=+0.016138624 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:39 compute-0 systemd[1]: Reloading.
Oct  9 09:35:39 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:35:39 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:35:39 compute-0 ceph-mon[4497]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:39 compute-0 ceph-mon[4497]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:39 compute-0 ceph-mon[4497]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:39 compute-0 ceph-mon[4497]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:39 compute-0 ceph-mon[4497]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:39 compute-0 ceph-mon[4497]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:39 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-0 ceph-mon[4497]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Oct  9 09:35:39 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/536206930' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct  9 09:35:39 compute-0 systemd[1]: Reloading.
Oct  9 09:35:39 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:35:39 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:35:39 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:35:39 compute-0 bash[19472]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Oct  9 09:35:39 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/536206930' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct  9 09:35:39 compute-0 ceph-mgr[4772]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  9 09:35:39 compute-0 ceph-mgr[4772]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  9 09:35:39 compute-0 ceph-mgr[4772]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  9 09:35:39 compute-0 ceph-mgr[4772]: mgr respawn  1: '-n'
Oct  9 09:35:39 compute-0 ceph-mgr[4772]: mgr respawn  2: 'mgr.compute-0.lwqgfy'
Oct  9 09:35:39 compute-0 ceph-mgr[4772]: mgr respawn  3: '-f'
Oct  9 09:35:39 compute-0 ceph-mgr[4772]: mgr respawn  4: '--setuser'
Oct  9 09:35:39 compute-0 ceph-mgr[4772]: mgr respawn  5: 'ceph'
Oct  9 09:35:39 compute-0 ceph-mgr[4772]: mgr respawn  6: '--setgroup'
Oct  9 09:35:39 compute-0 ceph-mgr[4772]: mgr respawn  7: 'ceph'
Oct  9 09:35:39 compute-0 ceph-mgr[4772]: mgr respawn  8: '--default-log-to-file=false'
Oct  9 09:35:39 compute-0 ceph-mgr[4772]: mgr respawn  9: '--default-log-to-journald=true'
Oct  9 09:35:39 compute-0 ceph-mgr[4772]: mgr respawn  10: '--default-log-to-stderr=false'
Oct  9 09:35:39 compute-0 ceph-mgr[4772]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct  9 09:35:39 compute-0 ceph-mgr[4772]: mgr respawn  exe_path /proc/self/exe
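
The argv dump above is ceph-mgr preparing to re-execute itself: when the set of enabled modules changes (here, dashboard being disabled), the active mgr replaces its own process image via /proc/self/exe while keeping its original arguments, which is why a fresh "Loading python module" pass follows. A minimal sketch of the same re-exec idiom in shell, purely illustrative:

    #!/bin/sh
    # Replace the running process with its own executable,
    # preserving the original argument vector.
    exec /proc/self/exe "$@"
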
Oct  9 09:35:39 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.lwqgfy(active, since 4s), standbys: compute-1.etokpp, compute-2.takdnm
Oct  9 09:35:40 compute-0 systemd[1]: libpod-c8af26dc446fdd5e8521cafd70f5faef7a1635c9b2a1c00df355e592f69f3467.scope: Deactivated successfully.
Oct  9 09:35:40 compute-0 podman[19285]: 2025-10-09 09:35:40.002246557 +0000 UTC m=+0.863045315 container died c8af26dc446fdd5e8521cafd70f5faef7a1635c9b2a1c00df355e592f69f3467 (image=quay.io/ceph/ceph:v19, name=brave_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-a172290db9de000389d4c787784ab10063fdd3bf75e8615c50fe2bf6ac4f33af-merged.mount: Deactivated successfully.
Oct  9 09:35:40 compute-0 podman[19285]: 2025-10-09 09:35:40.024696557 +0000 UTC m=+0.885495314 container remove c8af26dc446fdd5e8521cafd70f5faef7a1635c9b2a1c00df355e592f69f3467 (image=quay.io/ceph/ceph:v19, name=brave_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325)
Oct  9 09:35:40 compute-0 systemd[1]: libpod-conmon-c8af26dc446fdd5e8521cafd70f5faef7a1635c9b2a1c00df355e592f69f3467.scope: Deactivated successfully.
Oct  9 09:35:40 compute-0 systemd-logind[798]: Session 19 logged out. Waiting for processes to exit.
Oct  9 09:35:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ignoring --setuser ceph since I am not root
Oct  9 09:35:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ignoring --setgroup ceph since I am not root
Oct  9 09:35:40 compute-0 ceph-mgr[4772]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  9 09:35:40 compute-0 ceph-mgr[4772]: pidfile_write: ignore empty --pid-file
Oct  9 09:35:40 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'alerts'
Oct  9 09:35:40 compute-0 ceph-mgr[4772]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:35:40 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'balancer'
Oct  9 09:35:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:40.185+0000 7fbd02599140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:35:40 compute-0 ceph-mgr[4772]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:35:40 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'cephadm'
Oct  9 09:35:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:40.255+0000 7fbd02599140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
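
The "missing NOTIFY_TYPES member" lines repeat for most modules as the respawned mgr reloads them. They are load-time warnings (the module does not declare which notification types it consumes), not failures; each one lands in the journal twice because the same message arrives both from the daemon's own logging and via the container's systemd unit. If in doubt, module health can be checked afterwards (illustrative command, not from this log):

    ceph mgr module ls
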
Oct  9 09:35:40 compute-0 python3[19537]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:40 compute-0 podman[19538]: 2025-10-09 09:35:40.302019901 +0000 UTC m=+0.029341784 container create da2657fe86987e39de731677c08e3a4684648d6e0309093c1b7db7f05efcb293 (image=quay.io/ceph/ceph:v19, name=boring_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:40 compute-0 systemd[1]: Started libpod-conmon-da2657fe86987e39de731677c08e3a4684648d6e0309093c1b7db7f05efcb293.scope.
Oct  9 09:35:40 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b4f77e6ef0c524534d2b35b49c41553b3acda0b3935af911a998960bda9e0e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b4f77e6ef0c524534d2b35b49c41553b3acda0b3935af911a998960bda9e0e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b4f77e6ef0c524534d2b35b49c41553b3acda0b3935af911a998960bda9e0e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:40 compute-0 podman[19538]: 2025-10-09 09:35:40.353717575 +0000 UTC m=+0.081039447 container init da2657fe86987e39de731677c08e3a4684648d6e0309093c1b7db7f05efcb293 (image=quay.io/ceph/ceph:v19, name=boring_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:40 compute-0 podman[19538]: 2025-10-09 09:35:40.362188382 +0000 UTC m=+0.089510254 container start da2657fe86987e39de731677c08e3a4684648d6e0309093c1b7db7f05efcb293 (image=quay.io/ceph/ceph:v19, name=boring_joliot, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:40 compute-0 podman[19538]: 2025-10-09 09:35:40.366868796 +0000 UTC m=+0.094190668 container attach da2657fe86987e39de731677c08e3a4684648d6e0309093c1b7db7f05efcb293 (image=quay.io/ceph/ceph:v19, name=boring_joliot, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:40 compute-0 podman[19538]: 2025-10-09 09:35:40.291065603 +0000 UTC m=+0.018387487 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:40 compute-0 bash[19472]: Getting image source signatures
Oct  9 09:35:40 compute-0 bash[19472]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Oct  9 09:35:40 compute-0 bash[19472]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Oct  9 09:35:40 compute-0 bash[19472]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Oct  9 09:35:40 compute-0 ceph-mon[4497]: Deploying daemon node-exporter.compute-0 on compute-0
Oct  9 09:35:40 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/536206930' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct  9 09:35:40 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/536206930' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct  9 09:35:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Oct  9 09:35:40 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1543803184' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct  9 09:35:40 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'crash'
Oct  9 09:35:40 compute-0 ceph-mgr[4772]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:35:40 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'dashboard'
Oct  9 09:35:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:40.938+0000 7fbd02599140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:41 compute-0 bash[19472]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Oct  9 09:35:41 compute-0 bash[19472]: Writing manifest to image destination
Oct  9 09:35:41 compute-0 podman[19472]: 2025-10-09 09:35:41.028782719 +0000 UTC m=+1.179186768 container create f6c5e5aaa66e540d2596b51d05e5f681f364ae1190d47d1f1326559548314a4b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:35:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9ed33f48d992ec68091a556f7859416fcc77245186887b2f1750ed0d73c246/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:41 compute-0 podman[19472]: 2025-10-09 09:35:41.060968891 +0000 UTC m=+1.211372951 container init f6c5e5aaa66e540d2596b51d05e5f681f364ae1190d47d1f1326559548314a4b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:35:41 compute-0 podman[19472]: 2025-10-09 09:35:41.064964266 +0000 UTC m=+1.215368317 container start f6c5e5aaa66e540d2596b51d05e5f681f364ae1190d47d1f1326559548314a4b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:35:41 compute-0 bash[19472]: f6c5e5aaa66e540d2596b51d05e5f681f364ae1190d47d1f1326559548314a4b
Oct  9 09:35:41 compute-0 podman[19472]: 2025-10-09 09:35:41.019773433 +0000 UTC m=+1.170177503 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.068Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.068Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.069Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.069Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.070Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.070Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.070Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.070Z caller=node_exporter.go:117 level=info collector=arp
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.070Z caller=node_exporter.go:117 level=info collector=bcache
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.070Z caller=node_exporter.go:117 level=info collector=bonding
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.070Z caller=node_exporter.go:117 level=info collector=btrfs
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.070Z caller=node_exporter.go:117 level=info collector=conntrack
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.070Z caller=node_exporter.go:117 level=info collector=cpu
Oct  9 09:35:41 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=diskstats
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=dmi
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=edac
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=entropy
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=filefd
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=filesystem
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=hwmon
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=infiniband
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=ipvs
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=loadavg
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=mdadm
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=meminfo
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=netclass
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=netdev
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=netstat
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=nfs
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=nfsd
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=nvme
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=os
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=pressure
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=rapl
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=schedstat
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=selinux
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=sockstat
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=softnet
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=stat
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=tapestats
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=textfile
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=thermal_zone
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=time
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=uname
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=vmstat
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=xfs
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.072Z caller=node_exporter.go:117 level=info collector=zfs
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.073Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[19638]: ts=2025-10-09T09:35:41.073Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
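
node-exporter is now serving plain HTTP on port 9100 on all interfaces. The only startup error above is the diskstats collector failing to open /run/udev/data, which is expected when /run/udev is not bind-mounted into the container; the remaining collectors come up normally. A quick smoke test from the host (illustrative):

    curl -s http://localhost:9100/metrics | head
    # should print "# HELP" / "# TYPE" lines for the enabled collectors
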
Oct  9 09:35:41 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Oct  9 09:35:41 compute-0 systemd[1]: session-19.scope: Consumed 3.455s CPU time.
Oct  9 09:35:41 compute-0 systemd-logind[798]: Removed session 19.
Oct  9 09:35:41 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'devicehealth'
Oct  9 09:35:41 compute-0 ceph-mgr[4772]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:41.492+0000 7fbd02599140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'diskprediction_local'
Oct  9 09:35:41 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/1543803184' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct  9 09:35:41 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1543803184' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct  9 09:35:41 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.lwqgfy(active, since 6s), standbys: compute-1.etokpp, compute-2.takdnm
Oct  9 09:35:41 compute-0 systemd[1]: libpod-da2657fe86987e39de731677c08e3a4684648d6e0309093c1b7db7f05efcb293.scope: Deactivated successfully.
Oct  9 09:35:41 compute-0 podman[19538]: 2025-10-09 09:35:41.548007087 +0000 UTC m=+1.275328960 container died da2657fe86987e39de731677c08e3a4684648d6e0309093c1b7db7f05efcb293 (image=quay.io/ceph/ceph:v19, name=boring_joliot, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct  9 09:35:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2b4f77e6ef0c524534d2b35b49c41553b3acda0b3935af911a998960bda9e0e-merged.mount: Deactivated successfully.
Oct  9 09:35:41 compute-0 podman[19538]: 2025-10-09 09:35:41.571432158 +0000 UTC m=+1.298754031 container remove da2657fe86987e39de731677c08e3a4684648d6e0309093c1b7db7f05efcb293 (image=quay.io/ceph/ceph:v19, name=boring_joliot, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 09:35:41 compute-0 systemd[1]: libpod-conmon-da2657fe86987e39de731677c08e3a4684648d6e0309093c1b7db7f05efcb293.scope: Deactivated successfully.
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  from numpy import show_config as show_numpy_config
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:41.641+0000 7fbd02599140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-0 ceph-mgr[4772]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'influx'
Oct  9 09:35:41 compute-0 ceph-mgr[4772]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'insights'
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:41.704+0000 7fbd02599140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'iostat'
Oct  9 09:35:41 compute-0 ceph-mgr[4772]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'k8sevents'
Oct  9 09:35:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:41.824+0000 7fbd02599140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:35:42 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'localpool'
Oct  9 09:35:42 compute-0 python3[19732]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 09:35:42 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'mds_autoscaler'
Oct  9 09:35:42 compute-0 python3[19803]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760002541.9695454-34347-224475162234380/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:35:42 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'mirroring'
Oct  9 09:35:42 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'nfs'
Oct  9 09:35:42 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/1543803184' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct  9 09:35:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:42.722+0000 7fbd02599140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:35:42 compute-0 ceph-mgr[4772]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:35:42 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'orchestrator'
Oct  9 09:35:42 compute-0 python3[19853]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
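
Re-wrapped, the task above creates a CephFS volume with MDS placement across all three hosts; the trailing '#012' in the logged string is the syslog escape (octal 012) for an embedded newline, not part of the command:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        fs volume create cephfs '--placement=compute-0 compute-1 compute-2'
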
Oct  9 09:35:42 compute-0 podman[19854]: 2025-10-09 09:35:42.814801999 +0000 UTC m=+0.028251122 container create db292a3c811c1b292e4bba4506ec1f255dfb22bc40d351568d7ba5a55c9a0449 (image=quay.io/ceph/ceph:v19, name=focused_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:42 compute-0 systemd[1]: Started libpod-conmon-db292a3c811c1b292e4bba4506ec1f255dfb22bc40d351568d7ba5a55c9a0449.scope.
Oct  9 09:35:42 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5aa565d3fa1194e642c96f08d5c10388294c3ee15308314065941b490698416/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5aa565d3fa1194e642c96f08d5c10388294c3ee15308314065941b490698416/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5aa565d3fa1194e642c96f08d5c10388294c3ee15308314065941b490698416/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:42 compute-0 podman[19854]: 2025-10-09 09:35:42.877086379 +0000 UTC m=+0.090535512 container init db292a3c811c1b292e4bba4506ec1f255dfb22bc40d351568d7ba5a55c9a0449 (image=quay.io/ceph/ceph:v19, name=focused_robinson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:42 compute-0 podman[19854]: 2025-10-09 09:35:42.882811898 +0000 UTC m=+0.096261020 container start db292a3c811c1b292e4bba4506ec1f255dfb22bc40d351568d7ba5a55c9a0449 (image=quay.io/ceph/ceph:v19, name=focused_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default)
Oct  9 09:35:42 compute-0 podman[19854]: 2025-10-09 09:35:42.884040832 +0000 UTC m=+0.097489955 container attach db292a3c811c1b292e4bba4506ec1f255dfb22bc40d351568d7ba5a55c9a0449 (image=quay.io/ceph/ceph:v19, name=focused_robinson, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  9 09:35:42 compute-0 podman[19854]: 2025-10-09 09:35:42.804022465 +0000 UTC m=+0.017471608 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:42.920+0000 7fbd02599140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:42 compute-0 ceph-mgr[4772]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:42 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'osd_perf_query'
Oct  9 09:35:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:42.994+0000 7fbd02599140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:35:42 compute-0 ceph-mgr[4772]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:35:42 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'osd_support'
Oct  9 09:35:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:43.056+0000 7fbd02599140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-0 ceph-mgr[4772]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'pg_autoscaler'
Oct  9 09:35:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:43.125+0000 7fbd02599140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-0 ceph-mgr[4772]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'progress'
Oct  9 09:35:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:43.188+0000 7fbd02599140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-0 ceph-mgr[4772]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'prometheus'
Oct  9 09:35:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:43.491+0000 7fbd02599140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-0 ceph-mgr[4772]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'rbd_support'
Oct  9 09:35:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:43.576+0000 7fbd02599140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-0 ceph-mgr[4772]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'restful'
Oct  9 09:35:43 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'rgw'
Oct  9 09:35:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:44.000+0000 7fbd02599140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-0 ceph-mgr[4772]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'rook'
Oct  9 09:35:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:44.506+0000 7fbd02599140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-0 ceph-mgr[4772]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'selftest'
Oct  9 09:35:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:44.569+0000 7fbd02599140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-0 ceph-mgr[4772]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'snap_schedule'
Oct  9 09:35:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:44.642+0000 7fbd02599140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-0 ceph-mgr[4772]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'stats'
Oct  9 09:35:44 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'status'
Oct  9 09:35:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:44.778+0000 7fbd02599140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-0 ceph-mgr[4772]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'telegraf'
Oct  9 09:35:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:44.844+0000 7fbd02599140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-0 ceph-mgr[4772]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'telemetry'
Oct  9 09:35:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:44.983+0000 7fbd02599140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-0 ceph-mgr[4772]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'test_orchestrator'
Oct  9 09:35:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:45.182+0000 7fbd02599140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-0 ceph-mgr[4772]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'volumes'
Oct  9 09:35:45 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.etokpp restarted
Oct  9 09:35:45 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.etokpp started
Oct  9 09:35:45 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.lwqgfy(active, since 10s), standbys: compute-1.etokpp, compute-2.takdnm
Oct  9 09:35:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:45.419+0000 7fbd02599140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-0 ceph-mgr[4772]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'zabbix'
Oct  9 09:35:45 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.takdnm restarted
Oct  9 09:35:45 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.takdnm started
Oct  9 09:35:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:45.482+0000 7fbd02599140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-0 ceph-mgr[4772]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
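Annotation: the repeated "has missing NOTIFY_TYPES member" lines (logged once per module, and again each time the mgr restarts) are a harmless load-time warning. The mgr's Python loader checks each module for a NOTIFY_TYPES attribute declaring which cluster notifications it consumes and logs at -1 level when it is absent. A minimal sketch of the attribute the loader looks for, assuming the stock ceph-mgr Python interface; the module body itself is illustrative:

    # Minimal sketch, assuming the stock mgr_module interface shipped with
    # ceph-mgr; the notify types and logic below are illustrative only.
    from mgr_module import MgrModule, NotifyType

    class Module(MgrModule):
        # Declaring which notifications the module consumes is what
        # silences the "has missing NOTIFY_TYPES member" warning.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type: NotifyType, notify_id: str) -> None:
            if notify_type == NotifyType.osd_map:
                self.log.info("osdmap changed")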
Oct  9 09:35:45 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Active manager daemon compute-0.lwqgfy restarted
Oct  9 09:35:45 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Oct  9 09:35:45 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.lwqgfy
Oct  9 09:35:45 compute-0 ceph-mgr[4772]: ms_deliver_dispatch: unhandled message 0x55745f0a1860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct  9 09:35:45 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Oct  9 09:35:45 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Oct  9 09:35:45 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.lwqgfy(active, starting, since 0.0135589s), standbys: compute-1.etokpp, compute-2.takdnm
Oct  9 09:35:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ignoring --setuser ceph since I am not root
Oct  9 09:35:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ignoring --setgroup ceph since I am not root
Oct  9 09:35:45 compute-0 ceph-mgr[4772]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  9 09:35:45 compute-0 ceph-mgr[4772]: pidfile_write: ignore empty --pid-file
Oct  9 09:35:45 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'alerts'
Oct  9 09:35:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:45.661+0000 7f3fb8cc9140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-0 ceph-mgr[4772]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'balancer'
Oct  9 09:35:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:45.732+0000 7f3fb8cc9140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-0 ceph-mgr[4772]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'cephadm'
Oct  9 09:35:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:46 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'crash'
Oct  9 09:35:46 compute-0 ceph-mon[4497]: Active manager daemon compute-0.lwqgfy restarted
Oct  9 09:35:46 compute-0 ceph-mon[4497]: Activating manager daemon compute-0.lwqgfy
Oct  9 09:35:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:46.399+0000 7f3fb8cc9140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:35:46 compute-0 ceph-mgr[4772]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:35:46 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'dashboard'
Oct  9 09:35:46 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'devicehealth'
Oct  9 09:35:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:46.943+0000 7f3fb8cc9140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:35:46 compute-0 ceph-mgr[4772]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:35:46 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'diskprediction_local'
Oct  9 09:35:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  9 09:35:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  9 09:35:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  from numpy import show_config as show_numpy_config
Oct  9 09:35:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:47.087+0000 7f3fb8cc9140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:35:47 compute-0 ceph-mgr[4772]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:35:47 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'influx'
Oct  9 09:35:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:47.149+0000 7f3fb8cc9140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:35:47 compute-0 ceph-mgr[4772]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:35:47 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'insights'
Oct  9 09:35:47 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'iostat'
Oct  9 09:35:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:47.270+0000 7f3fb8cc9140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:35:47 compute-0 ceph-mgr[4772]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:35:47 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'k8sevents'
Oct  9 09:35:47 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'localpool'
Oct  9 09:35:47 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'mds_autoscaler'
Oct  9 09:35:47 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'mirroring'
Oct  9 09:35:47 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'nfs'
Oct  9 09:35:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:48.128+0000 7f3fb8cc9140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-0 ceph-mgr[4772]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'orchestrator'
Oct  9 09:35:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:48.316+0000 7f3fb8cc9140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-0 ceph-mgr[4772]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'osd_perf_query'
Oct  9 09:35:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:48.383+0000 7f3fb8cc9140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-0 ceph-mgr[4772]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'osd_support'
Oct  9 09:35:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:48.442+0000 7f3fb8cc9140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-0 ceph-mgr[4772]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'pg_autoscaler'
Oct  9 09:35:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:48.510+0000 7f3fb8cc9140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-0 ceph-mgr[4772]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'progress'
Oct  9 09:35:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:48.572+0000 7f3fb8cc9140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-0 ceph-mgr[4772]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'prometheus'
Oct  9 09:35:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:48.867+0000 7f3fb8cc9140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-0 ceph-mgr[4772]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'rbd_support'
Oct  9 09:35:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:48.951+0000 7f3fb8cc9140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-0 ceph-mgr[4772]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'restful'
Oct  9 09:35:49 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'rgw'
Oct  9 09:35:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:49.326+0000 7f3fb8cc9140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-0 ceph-mgr[4772]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'rook'
Oct  9 09:35:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:49.806+0000 7f3fb8cc9140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-0 ceph-mgr[4772]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'selftest'
Oct  9 09:35:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:49.868+0000 7f3fb8cc9140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-0 ceph-mgr[4772]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'snap_schedule'
Oct  9 09:35:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:49.937+0000 7f3fb8cc9140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-0 ceph-mgr[4772]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'stats'
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'status'
Oct  9 09:35:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:50.066+0000 7f3fb8cc9140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'telegraf'
Oct  9 09:35:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:50.127+0000 7f3fb8cc9140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'telemetry'
Oct  9 09:35:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:50.259+0000 7f3fb8cc9140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'test_orchestrator'
Oct  9 09:35:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:50.449+0000 7f3fb8cc9140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'volumes'
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.etokpp restarted
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.etokpp started
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.lwqgfy(active, starting, since 5s), standbys: compute-2.takdnm, compute-1.etokpp
Oct  9 09:35:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:50.679+0000 7f3fb8cc9140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'zabbix'
Oct  9 09:35:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:35:50.739+0000 7f3fb8cc9140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Active manager daemon compute-0.lwqgfy restarted
Oct  9 09:35:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.lwqgfy
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: ms_deliver_dispatch: unhandled message 0x556ac14f5860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct  9 09:35:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr handle_mgr_map Activating!
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr handle_mgr_map I am now activating
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.lwqgfy(active, starting, since 0.0130272s), standbys: compute-2.takdnm, compute-1.etokpp
Oct  9 09:35:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  9 09:35:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 09:35:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  9 09:35:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.lwqgfy", "id": "compute-0.lwqgfy"} v 0)
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-0.lwqgfy", "id": "compute-0.lwqgfy"}]: dispatch
Oct  9 09:35:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.takdnm", "id": "compute-2.takdnm"} v 0)
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-2.takdnm", "id": "compute-2.takdnm"}]: dispatch
Oct  9 09:35:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.etokpp", "id": "compute-1.etokpp"} v 0)
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-1.etokpp", "id": "compute-1.etokpp"}]: dispatch
Oct  9 09:35:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 09:35:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  9 09:35:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
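Annotation: the ENOENT here is consistent with the osdmap lines above ("3 total, 2 up, 3 in"): one OSD is marked in but not up, and the metadata requests for osd.0 and osd.1 succeed while osd.2's does not, so the mon has no metadata blob for osd.2 yet. A hypothetical spot check from any node with an admin keyring:

    # Hypothetical spot check for the ENOENT above: while osd.2 is "in"
    # but not "up", `osd metadata` has nothing to return for it.
    import subprocess

    subprocess.run(["ceph", "osd", "tree"], check=True)
    # Expected to exit non-zero with ENOENT until osd.2 boots and registers:
    rc = subprocess.run(["ceph", "osd", "metadata", "2"]).returncode
    print("osd.2 metadata rc:", rc)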
Oct  9 09:35:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  9 09:35:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e1 all = 1
Oct  9 09:35:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  9 09:35:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr load_all_metadata Skipping incomplete metadata entry
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Manager daemon compute-0.lwqgfy is now available
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: balancer
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [balancer INFO root] Starting
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:35:50
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
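Annotation: the balancer output is expected this early. The mgr has only just gone active and no pgmap has been received, so 100% of PGs ("1.000000") are still unknown and the upmap optimizer defers; it retries on its own once PG stats arrive. A hedged sketch of watching that from outside (the JSON keys are read defensively since their exact set varies by release):

    # Hedged sketch: poll the stock `ceph balancer status` command until
    # the optimizer stops deferring; the 5s interval is arbitrary.
    import json
    import subprocess
    import time

    def balancer_status() -> dict:
        out = subprocess.run(
            ["ceph", "balancer", "status", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    for _ in range(12):
        s = balancer_status()
        print("active:", s.get("active"), "mode:", s.get("mode"),
              "result:", s.get("optimize_result"))
        time.sleep(5)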
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: cephadm
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: crash
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: dashboard
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [dashboard INFO access_control] Loading user roles DB version=2
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [dashboard INFO sso] Loading SSO DB version=1
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: devicehealth
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: iostat
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: nfs
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: orchestrator
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [devicehealth INFO root] Starting
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: pg_autoscaler
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: progress
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [progress INFO root] Loading...
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f3f59022130>, <progress.module.GhostEvent object at 0x7f3f59022190>, <progress.module.GhostEvent object at 0x7f3f590221f0>, <progress.module.GhostEvent object at 0x7f3f59022220>] historic events
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [progress INFO root] Loaded OSDMap, ready.
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [rbd_support INFO root] recovery thread starting
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [rbd_support INFO root] starting setup
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: rbd_support
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: restful
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: status
Oct  9 09:35:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"} v 0)
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"}]: dispatch
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: telemetry
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [restful INFO root] server_addr: :: server_port: 8003
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [restful WARNING root] server not running: no certificate configured
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.takdnm restarted
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.takdnm started
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [rbd_support INFO root] PerfHandler: starting
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_task_task: vms, start_after=
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_task_task: volumes, start_after=
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_task_task: backups, start_after=
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_task_task: images, start_after=
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TaskHandler: starting
Oct  9 09:35:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"} v 0)
Oct  9 09:35:50 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"}]: dispatch
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: [rbd_support INFO root] setup complete
Oct  9 09:35:50 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: volumes
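Annotation: of the modules constructed in this burst, only restful fails to serve — it refuses to bind on port 8003 without a TLS certificate (the "server not running: no certificate configured" WARNING above). Assuming the stock module, a self-signed certificate clears it:

    # Minimal sketch for the restful "no certificate configured" warning,
    # assuming the stock module: generate a self-signed cert, then confirm
    # the endpoint is advertised by the mgr.
    import subprocess

    subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)
    subprocess.run(["ceph", "mgr", "services"], check=True)  # should now list "restful"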
Oct  9 09:35:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Oct  9 09:35:51 compute-0 systemd-logind[798]: New session 20 of user ceph-admin.
Oct  9 09:35:51 compute-0 systemd[1]: Started Session 20 of User ceph-admin.
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.module] Engine started.
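Annotation: with the CherryPy engine started, every controller registered above is live under the ssl=no host/port logged earlier. A hypothetical probe of two of those routes; host and port come from the log, the credentials are placeholders, and the versioned Accept header is what the dashboard REST API expects:

    # Hypothetical probe of the dashboard REST API initialized above;
    # host/port/ssl=no are from the log, credentials are placeholders.
    import json
    import urllib.request

    BASE = "http://192.168.122.100:8443"
    HDRS = {"Accept": "application/vnd.ceph.api.v1.0+json",
            "Content-Type": "application/json"}

    # The Auth controller registered /api/auth; it returns a bearer token.
    req = urllib.request.Request(
        BASE + "/api/auth",
        data=json.dumps({"username": "admin", "password": "secret"}).encode(),
        headers=HDRS, method="POST")
    token = json.load(urllib.request.urlopen(req))["token"]

    # The Health controller registered /api/health; "minimal" is its light view.
    req = urllib.request.Request(
        BASE + "/api/health/minimal",
        headers={**HDRS, "Authorization": "Bearer " + token})
    print(json.load(urllib.request.urlopen(req)))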
Oct  9 09:35:51 compute-0 ceph-mon[4497]: Active manager daemon compute-0.lwqgfy restarted
Oct  9 09:35:51 compute-0 ceph-mon[4497]: Activating manager daemon compute-0.lwqgfy
Oct  9 09:35:51 compute-0 ceph-mon[4497]: Manager daemon compute-0.lwqgfy is now available
Oct  9 09:35:51 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"}]: dispatch
Oct  9 09:35:51 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"}]: dispatch
Oct  9 09:35:51 compute-0 podman[20160]: 2025-10-09 09:35:51.665163145 +0000 UTC m=+0.036527449 container exec fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:35:51 compute-0 podman[20160]: 2025-10-09 09:35:51.74441518 +0000 UTC m=+0.115779484 container exec_died fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  9 09:35:51 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.lwqgfy(active, since 1.02912s), standbys: compute-2.takdnm, compute-1.etokpp
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14370 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct  9 09:35:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Oct  9 09:35:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v3: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:35:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Oct  9 09:35:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct  9 09:35:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Oct  9 09:35:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct  9 09:35:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Oct  9 09:35:51 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0[4493]: 2025-10-09T09:35:51.789+0000 7fbe8099f640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  9 09:35:51 compute-0 ceph-mon[4497]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  9 09:35:51 compute-0 ceph-mon[4497]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct  9 09:35:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
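Annotation: the `fs volume create` dispatched at 09:35:51 fans out exactly as the audit lines show — two pool creates (the data pool flagged bulk) and an `fs new`. The MDS_ALL_DOWN ERR in between is a transient: the filesystem exists for an instant before any MDS daemon does, and clears once cephadm deploys the mds service saved below. The same fan-out issued by hand, with names and flags taken from the audit lines:

    # The same fan-out the volumes module performed; pool names, the bulk
    # flag, and the fs name are taken from the audit lines above.
    import subprocess

    def ceph(*args: str) -> None:
        subprocess.run(["ceph", *args], check=True)

    ceph("osd", "pool", "create", "cephfs.cephfs.meta")
    ceph("osd", "pool", "create", "cephfs.cephfs.data", "--bulk")
    ceph("fs", "new", "cephfs", "cephfs.cephfs.meta", "cephfs.cephfs.data")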
Oct  9 09:35:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e2 new map
Oct  9 09:35:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e2 print_map
    e2
    btime 2025-10-09T09:35:51.790448+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  2
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-10-09T09:35:51.790428+0000
    modified  2025-10-09T09:35:51.790428+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds  1
    in
    up  {}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  0
    qdb_cluster  leader: 0 members:
Oct  9 09:35:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Oct  9 09:35:51 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Oct  9 09:35:51 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Oct  9 09:35:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 09:35:51 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct  9 09:35:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:51 compute-0 ceph-mgr[4772]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct  9 09:35:51 compute-0 systemd[1]: libpod-db292a3c811c1b292e4bba4506ec1f255dfb22bc40d351568d7ba5a55c9a0449.scope: Deactivated successfully.
Oct  9 09:35:51 compute-0 podman[19854]: 2025-10-09 09:35:51.826501755 +0000 UTC m=+9.039950878 container died db292a3c811c1b292e4bba4506ec1f255dfb22bc40d351568d7ba5a55c9a0449 (image=quay.io/ceph/ceph:v19, name=focused_robinson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 09:35:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5aa565d3fa1194e642c96f08d5c10388294c3ee15308314065941b490698416-merged.mount: Deactivated successfully.
Oct  9 09:35:51 compute-0 podman[19854]: 2025-10-09 09:35:51.85305192 +0000 UTC m=+9.066501043 container remove db292a3c811c1b292e4bba4506ec1f255dfb22bc40d351568d7ba5a55c9a0449 (image=quay.io/ceph/ceph:v19, name=focused_robinson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  9 09:35:51 compute-0 systemd[1]: libpod-conmon-db292a3c811c1b292e4bba4506ec1f255dfb22bc40d351568d7ba5a55c9a0449.scope: Deactivated successfully.
Oct  9 09:35:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:35:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:35:52 compute-0 podman[20296]: 2025-10-09 09:35:52.081738433 +0000 UTC m=+0.037070827 container exec f6c5e5aaa66e540d2596b51d05e5f681f364ae1190d47d1f1326559548314a4b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:35:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:35:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:35:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-0 python3[20280]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:52 compute-0 podman[20318]: 2025-10-09 09:35:52.143260688 +0000 UTC m=+0.046877023 container exec_died f6c5e5aaa66e540d2596b51d05e5f681f364ae1190d47d1f1326559548314a4b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:35:52 compute-0 podman[20296]: 2025-10-09 09:35:52.148034699 +0000 UTC m=+0.103367092 container exec_died f6c5e5aaa66e540d2596b51d05e5f681f364ae1190d47d1f1326559548314a4b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:35:52 compute-0 podman[20328]: 2025-10-09 09:35:52.169044871 +0000 UTC m=+0.041171222 container create c4fcff7467978624ff5a1577d19a6650b12ec1827b6458af05d4863caf7e3717 (image=quay.io/ceph/ceph:v19, name=dreamy_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct  9 09:35:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:35:52 compute-0 systemd[1]: Started libpod-conmon-c4fcff7467978624ff5a1577d19a6650b12ec1827b6458af05d4863caf7e3717.scope.
Oct  9 09:35:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:35:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f885e99b0eef748556c7dc5eed05472481cfe6e9bdf3b9aafc6347b4528bc187/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f885e99b0eef748556c7dc5eed05472481cfe6e9bdf3b9aafc6347b4528bc187/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f885e99b0eef748556c7dc5eed05472481cfe6e9bdf3b9aafc6347b4528bc187/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:52 compute-0 podman[20328]: 2025-10-09 09:35:52.217604535 +0000 UTC m=+0.089730896 container init c4fcff7467978624ff5a1577d19a6650b12ec1827b6458af05d4863caf7e3717 (image=quay.io/ceph/ceph:v19, name=dreamy_driscoll, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:52 compute-0 podman[20328]: 2025-10-09 09:35:52.223507531 +0000 UTC m=+0.095633882 container start c4fcff7467978624ff5a1577d19a6650b12ec1827b6458af05d4863caf7e3717 (image=quay.io/ceph/ceph:v19, name=dreamy_driscoll, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:35:52 compute-0 podman[20328]: 2025-10-09 09:35:52.22607922 +0000 UTC m=+0.098205571 container attach c4fcff7467978624ff5a1577d19a6650b12ec1827b6458af05d4863caf7e3717 (image=quay.io/ceph/ceph:v19, name=dreamy_driscoll, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:52 compute-0 podman[20328]: 2025-10-09 09:35:52.144761823 +0000 UTC m=+0.016888194 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:52 compute-0 ceph-mgr[4772]: [cephadm INFO cherrypy.error] [09/Oct/2025:09:35:52] ENGINE Bus STARTING
Oct  9 09:35:52 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : [09/Oct/2025:09:35:52] ENGINE Bus STARTING
Oct  9 09:35:52 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.24205 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 09:35:52 compute-0 ceph-mgr[4772]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:52 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct  9 09:35:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-0 dreamy_driscoll[20340]: Scheduled mds.cephfs update...
Oct  9 09:35:52 compute-0 systemd[1]: libpod-c4fcff7467978624ff5a1577d19a6650b12ec1827b6458af05d4863caf7e3717.scope: Deactivated successfully.
Oct  9 09:35:52 compute-0 podman[20328]: 2025-10-09 09:35:52.513871736 +0000 UTC m=+0.385998107 container died c4fcff7467978624ff5a1577d19a6650b12ec1827b6458af05d4863caf7e3717 (image=quay.io/ceph/ceph:v19, name=dreamy_driscoll, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f885e99b0eef748556c7dc5eed05472481cfe6e9bdf3b9aafc6347b4528bc187-merged.mount: Deactivated successfully.
Oct  9 09:35:52 compute-0 podman[20328]: 2025-10-09 09:35:52.536178582 +0000 UTC m=+0.408304932 container remove c4fcff7467978624ff5a1577d19a6650b12ec1827b6458af05d4863caf7e3717 (image=quay.io/ceph/ceph:v19, name=dreamy_driscoll, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:35:52 compute-0 systemd[1]: libpod-conmon-c4fcff7467978624ff5a1577d19a6650b12ec1827b6458af05d4863caf7e3717.scope: Deactivated successfully.
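The ansible play above drives "ceph orch apply" through a throwaway ceph:v19 container, with /tmp/ceph_mds.yml bind-mounted as /home/ceph_spec.yaml. The spec file's contents are never logged; a minimal sketch consistent with the "Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2" lines would be (assumed contents, not taken from the log):

    cat > /tmp/ceph_mds.yml <<'EOF'
    service_type: mds
    service_id: cephfs
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    EOF

cephadm acknowledges the spec asynchronously ("Scheduled mds.cephfs update...") and deploys the MDS daemons on a later pass of its serve loop.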
Oct  9 09:35:52 compute-0 ceph-mgr[4772]: [cephadm INFO cherrypy.error] [09/Oct/2025:09:35:52] ENGINE Serving on http://192.168.122.100:8765
Oct  9 09:35:52 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : [09/Oct/2025:09:35:52] ENGINE Serving on http://192.168.122.100:8765
Oct  9 09:35:52 compute-0 ceph-mgr[4772]: [cephadm INFO cherrypy.error] [09/Oct/2025:09:35:52] ENGINE Serving on https://192.168.122.100:7150
Oct  9 09:35:52 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : [09/Oct/2025:09:35:52] ENGINE Serving on https://192.168.122.100:7150
Oct  9 09:35:52 compute-0 ceph-mgr[4772]: [cephadm INFO cherrypy.error] [09/Oct/2025:09:35:52] ENGINE Bus STARTED
Oct  9 09:35:52 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : [09/Oct/2025:09:35:52] ENGINE Bus STARTED
Oct  9 09:35:52 compute-0 ceph-mgr[4772]: [cephadm INFO cherrypy.error] [09/Oct/2025:09:35:52] ENGINE Client ('192.168.122.100', 36178) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 09:35:52 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : [09/Oct/2025:09:35:52] ENGINE Client ('192.168.122.100', 36178) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
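These ENGINE "Client ... lost" lines are ordinarily benign: some peer opened a TCP connection to the cephadm HTTPS endpoint and closed it before completing the TLS handshake, which is exactly how a port-liveness probe behaves. A bare TCP probe reproduces the message (sketch; the actual client at 192.168.122.100:36178 is not identified anywhere in this log):

    nc -z 192.168.122.100 7150    # connect and close without speaking TLS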
Oct  9 09:35:52 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v5: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:35:52 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct  9 09:35:52 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct  9 09:35:52 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct  9 09:35:52 compute-0 ceph-mon[4497]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  9 09:35:52 compute-0 ceph-mon[4497]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct  9 09:35:52 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
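The three commands dispatched above create the CephFS pools and the filesystem itself. MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX are expected at this instant: the mds.cephfs daemons scheduled earlier have not started yet, so the brand-new filesystem briefly has no MDS. Run by hand, the equivalent sequence would be (a sketch of the CLI forms behind the logged mon_commands):

    ceph osd pool create cephfs.cephfs.meta
    ceph osd pool create cephfs.cephfs.data --bulk
    ceph fs new cephfs cephfs.cephfs.meta cephfs.cephfs.data

Both health checks clear once an MDS comes up and takes rank 0; "ceph fs status cephfs" shows the progression.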
Oct  9 09:35:52 compute-0 ceph-mon[4497]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:52 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-0 ceph-mon[4497]: [09/Oct/2025:09:35:52] ENGINE Bus STARTING
Oct  9 09:35:52 compute-0 ceph-mon[4497]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:52 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-0 python3[20503]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
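Unwrapped from the ansible one-liner (flags exactly as logged, line breaks added; the trailing '#012' is syslog's escaped newline inside the quoted --placement value):

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 \
      --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2'

This single call is what later produces the nfs.cephfs and ingress.nfs.cephfs service specs and the .nfs pool seen below.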
Oct  9 09:35:52 compute-0 podman[20554]: 2025-10-09 09:35:52.832033075 +0000 UTC m=+0.033607104 container create 3eb03119d8cb1196e379fdad0f2f191864d9b68b88c276be59a7c991278a416b (image=quay.io/ceph/ceph:v19, name=upbeat_feynman, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:52 compute-0 ceph-mgr[4772]: [devicehealth INFO root] Check health
Oct  9 09:35:52 compute-0 systemd[1]: Started libpod-conmon-3eb03119d8cb1196e379fdad0f2f191864d9b68b88c276be59a7c991278a416b.scope.
Oct  9 09:35:52 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ed9e83dffe8a2d52e7ff7ac03f34beb1f29208a07aa57399df13357e7c1d64/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ed9e83dffe8a2d52e7ff7ac03f34beb1f29208a07aa57399df13357e7c1d64/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ed9e83dffe8a2d52e7ff7ac03f34beb1f29208a07aa57399df13357e7c1d64/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:52 compute-0 podman[20554]: 2025-10-09 09:35:52.877781112 +0000 UTC m=+0.079355161 container init 3eb03119d8cb1196e379fdad0f2f191864d9b68b88c276be59a7c991278a416b (image=quay.io/ceph/ceph:v19, name=upbeat_feynman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:52 compute-0 podman[20554]: 2025-10-09 09:35:52.88323449 +0000 UTC m=+0.084808520 container start 3eb03119d8cb1196e379fdad0f2f191864d9b68b88c276be59a7c991278a416b (image=quay.io/ceph/ceph:v19, name=upbeat_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:35:52 compute-0 podman[20554]: 2025-10-09 09:35:52.884605138 +0000 UTC m=+0.086179167 container attach 3eb03119d8cb1196e379fdad0f2f191864d9b68b88c276be59a7c991278a416b (image=quay.io/ceph/ceph:v19, name=upbeat_feynman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  9 09:35:52 compute-0 podman[20554]: 2025-10-09 09:35:52.817616079 +0000 UTC m=+0.019190127 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:35:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:35:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:35:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:35:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct  9 09:35:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Oct  9 09:35:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:52 compute-0 ceph-mgr[4772]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 128.5M
Oct  9 09:35:52 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 128.5M
Oct  9 09:35:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Oct  9 09:35:52 compute-0 ceph-mgr[4772]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct  9 09:35:52 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
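The failure is arithmetic, not a malfunction: with osd_memory_target autotuning enabled, cephadm divides the memory it considers available on the host among that host's OSDs, and on this small VM the share works out to 134814105 bytes, well under the option's hard floor of 939524096 bytes. In round units:

    echo $(( 134814105 / 1048576 ))   # 128  -> the "128.5M" the autotuner computed
    echo $(( 939524096 / 1048576 ))   # 896  -> the osd_memory_target minimum, in MiB

The OSD simply keeps its current value. On memory-starved lab nodes a common mitigation (an option, not something this log shows being done) is to switch the autotuner off:

    ceph config set osd osd_memory_target_autotune false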
Oct  9 09:35:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:35:53 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:35:53 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct  9 09:35:53 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:35:53 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:35:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:35:53 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
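These two mon_commands gather exactly the material cephadm is about to push to every host: a minimal ceph.conf plus the admin keyring. The output of "ceph config generate-minimal-conf" has this shape (the fsid is taken from this log; the mon_host value is illustrative, since the monitor addresses are not printed in this excerpt):

    # minimal ceph.conf for 286f8bf0-da72-5823-9a4e-ac4457d9e609
    [global]
            fsid = 286f8bf0-da72-5823-9a4e-ac4457d9e609
            mon_host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]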
Oct  9 09:35:53 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct  9 09:35:53 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct  9 09:35:53 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct  9 09:35:53 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct  9 09:35:53 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct  9 09:35:53 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct  9 09:35:53 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14418 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 09:35:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Oct  9 09:35:53 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Oct  9 09:35:53 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:53 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:53 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:53 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:53 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:53 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:53 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.lwqgfy(active, since 2s), standbys: compute-2.takdnm, compute-1.etokpp
Oct  9 09:35:53 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:53 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:53 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:53 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:53 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:53 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:53 compute-0 ceph-mon[4497]: [09/Oct/2025:09:35:52] ENGINE Serving on http://192.168.122.100:8765
Oct  9 09:35:53 compute-0 ceph-mon[4497]: [09/Oct/2025:09:35:52] ENGINE Serving on https://192.168.122.100:7150
Oct  9 09:35:53 compute-0 ceph-mon[4497]: [09/Oct/2025:09:35:52] ENGINE Bus STARTED
Oct  9 09:35:53 compute-0 ceph-mon[4497]: [09/Oct/2025:09:35:52] ENGINE Client ('192.168.122.100', 36178) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 09:35:53 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:53 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:53 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:53 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:53 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:53 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:53 compute-0 ceph-mon[4497]: Adjusting osd_memory_target on compute-1 to 128.5M
Oct  9 09:35:53 compute-0 ceph-mon[4497]: Unable to set osd_memory_target on compute-1 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct  9 09:35:53 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:53 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:53 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:53 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:35:53 compute-0 ceph-mon[4497]: Updating compute-0:/etc/ceph/ceph.conf
Oct  9 09:35:53 compute-0 ceph-mon[4497]: Updating compute-1:/etc/ceph/ceph.conf
Oct  9 09:35:53 compute-0 ceph-mon[4497]: Updating compute-2:/etc/ceph/ceph.conf
Oct  9 09:35:53 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Oct  9 09:35:53 compute-0 ceph-mon[4497]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:53 compute-0 ceph-mon[4497]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:53 compute-0 ceph-mon[4497]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Oct  9 09:35:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Oct  9 09:35:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Oct  9 09:35:54 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Oct  9 09:35:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 09:35:54 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 09:35:54 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 09:35:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Oct  9 09:35:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Oct  9 09:35:54 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:54 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:54 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:54 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:54 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:54 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:35:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:35:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:35:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:35:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:35:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:35:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:35:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:54 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev 887becba-bd42-41e6-bb69-dc6391df0b2c (Updating node-exporter deployment (+2 -> 3))
Oct  9 09:35:54 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Oct  9 09:35:54 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Oct  9 09:35:54 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v7: 39 pgs: 1 unknown, 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:35:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 29 pg[8.0( empty local-lis/les=0/0 n=0 ec=29/29 lis/c=0/0 les/c/f=0/0/0 sis=29) [1] r=0 lpr=29 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Oct  9 09:35:55 compute-0 ceph-mon[4497]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:55 compute-0 ceph-mon[4497]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:55 compute-0 ceph-mon[4497]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:55 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Oct  9 09:35:55 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Oct  9 09:35:55 compute-0 ceph-mon[4497]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:55 compute-0 ceph-mon[4497]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:55 compute-0 ceph-mon[4497]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:55 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Oct  9 09:35:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Oct  9 09:35:55 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
Oct  9 09:35:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 09:35:55 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 09:35:55 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 09:35:55 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 30 pg[8.0( empty local-lis/les=29/30 n=0 ec=29/29 lis/c=0/0 les/c/f=0/0/0 sis=29) [1] r=0 lpr=29 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:55 compute-0 ceph-mgr[4772]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Oct  9 09:35:55 compute-0 ceph-mgr[4772]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:55 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:35:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-0 ceph-mgr[4772]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:55 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  9 09:35:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-0 systemd[1]: libpod-3eb03119d8cb1196e379fdad0f2f191864d9b68b88c276be59a7c991278a416b.scope: Deactivated successfully.
Oct  9 09:35:55 compute-0 podman[21522]: 2025-10-09 09:35:55.089157321 +0000 UTC m=+0.017760800 container died 3eb03119d8cb1196e379fdad0f2f191864d9b68b88c276be59a7c991278a416b (image=quay.io/ceph/ceph:v19, name=upbeat_feynman, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:35:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2ed9e83dffe8a2d52e7ff7ac03f34beb1f29208a07aa57399df13357e7c1d64-merged.mount: Deactivated successfully.
Oct  9 09:35:55 compute-0 podman[21522]: 2025-10-09 09:35:55.10579117 +0000 UTC m=+0.034394639 container remove 3eb03119d8cb1196e379fdad0f2f191864d9b68b88c276be59a7c991278a416b (image=quay.io/ceph/ceph:v19, name=upbeat_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  9 09:35:55 compute-0 systemd[1]: libpod-conmon-3eb03119d8cb1196e379fdad0f2f191864d9b68b88c276be59a7c991278a416b.scope: Deactivated successfully.
Oct  9 09:35:55 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.lwqgfy(active, since 4s), standbys: compute-2.takdnm, compute-1.etokpp
Oct  9 09:35:55 compute-0 python3[21610]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 09:35:55 compute-0 ceph-mon[4497]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 09:35:55 compute-0 ceph-mgr[4772]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Oct  9 09:35:55 compute-0 python3[21683]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760002555.3775275-34378-106138059730211/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=f2b8c5d3158b549e18e5631f97d7800b8ceae49e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
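The copied file is a standard Ceph keyring; ansible deliberately withholds its body (content=NOT_LOGGING_PARAMETER), and owner/group 167 is the ceph user and group on RHEL-family systems. Its shape, with assumed OpenStack-style caps (the real key and caps are not visible in this log):

    [client.openstack]
            key = <redacted, never logged>
            caps mon = "profile rbd"
            caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images"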
Oct  9 09:35:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Oct  9 09:35:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Oct  9 09:35:56 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Oct  9 09:35:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 09:35:56 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 09:35:56 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 09:35:56 compute-0 ceph-mon[4497]: Deploying daemon node-exporter.compute-1 on compute-1
Oct  9 09:35:56 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Oct  9 09:35:56 compute-0 ceph-mon[4497]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:56 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:56 compute-0 ceph-mon[4497]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:56 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:56 compute-0 ceph-mon[4497]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 09:35:56 compute-0 python3[21733]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:56 compute-0 podman[21734]: 2025-10-09 09:35:56.27828368 +0000 UTC m=+0.030923893 container create c3dd2ce55e324b85ca09c51431a74bafcb81158ed94ac0082738dcb1817fc6ab (image=quay.io/ceph/ceph:v19, name=keen_jang, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct  9 09:35:56 compute-0 systemd[1]: Started libpod-conmon-c3dd2ce55e324b85ca09c51431a74bafcb81158ed94ac0082738dcb1817fc6ab.scope.
Oct  9 09:35:56 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5e01064fcbd62b5ed8b31ed43382cbfd5e7a4bcdf4c06c571d9e0172523772c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5e01064fcbd62b5ed8b31ed43382cbfd5e7a4bcdf4c06c571d9e0172523772c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:56 compute-0 podman[21734]: 2025-10-09 09:35:56.33089126 +0000 UTC m=+0.083531494 container init c3dd2ce55e324b85ca09c51431a74bafcb81158ed94ac0082738dcb1817fc6ab (image=quay.io/ceph/ceph:v19, name=keen_jang, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:56 compute-0 podman[21734]: 2025-10-09 09:35:56.336200452 +0000 UTC m=+0.088840667 container start c3dd2ce55e324b85ca09c51431a74bafcb81158ed94ac0082738dcb1817fc6ab (image=quay.io/ceph/ceph:v19, name=keen_jang, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:35:56 compute-0 podman[21734]: 2025-10-09 09:35:56.339293657 +0000 UTC m=+0.091933871 container attach c3dd2ce55e324b85ca09c51431a74bafcb81158ed94ac0082738dcb1817fc6ab (image=quay.io/ceph/ceph:v19, name=keen_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:35:56 compute-0 podman[21734]: 2025-10-09 09:35:56.265000437 +0000 UTC m=+0.017640661 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Oct  9 09:35:56 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1480014278' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct  9 09:35:56 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1480014278' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
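The import reads the keyring from inside the container (the /etc/ceph bind mount makes the host file visible at the same path) and registers client.openstack with the monitors. A quick check that the entity landed (sketch):

    ceph auth get client.openstack    # prints the key and caps just imported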
Oct  9 09:35:56 compute-0 systemd[1]: libpod-c3dd2ce55e324b85ca09c51431a74bafcb81158ed94ac0082738dcb1817fc6ab.scope: Deactivated successfully.
Oct  9 09:35:56 compute-0 podman[21734]: 2025-10-09 09:35:56.683314634 +0000 UTC m=+0.435954849 container died c3dd2ce55e324b85ca09c51431a74bafcb81158ed94ac0082738dcb1817fc6ab (image=quay.io/ceph/ceph:v19, name=keen_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  9 09:35:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5e01064fcbd62b5ed8b31ed43382cbfd5e7a4bcdf4c06c571d9e0172523772c-merged.mount: Deactivated successfully.
Oct  9 09:35:56 compute-0 podman[21734]: 2025-10-09 09:35:56.701581499 +0000 UTC m=+0.454221714 container remove c3dd2ce55e324b85ca09c51431a74bafcb81158ed94ac0082738dcb1817fc6ab (image=quay.io/ceph/ceph:v19, name=keen_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:56 compute-0 systemd[1]: libpod-conmon-c3dd2ce55e324b85ca09c51431a74bafcb81158ed94ac0082738dcb1817fc6ab.scope: Deactivated successfully.
Oct  9 09:35:56 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v10: 39 pgs: 1 unknown, 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:35:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:35:56 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:35:56 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Oct  9 09:35:56 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:56 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Oct  9 09:35:56 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Oct  9 09:35:57 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/1480014278' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct  9 09:35:57 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/1480014278' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct  9 09:35:57 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:57 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:57 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:57 compute-0 python3[21808]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
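Note _uses_shell=True here, unlike the earlier invocations: the pipe into jq requires a shell. Stripped of the podman wrapper, the check is:

    ceph status --format json | jq .monmap.num_mons

which prints the monitor count as a bare integer (for example, 3 on a three-monitor cluster; the actual count is not echoed in this excerpt).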
Oct  9 09:35:57 compute-0 podman[21810]: 2025-10-09 09:35:57.329319934 +0000 UTC m=+0.026757943 container create d480fc9eb8a9c032f9e9041989d22e2043764464ce147b0a91da213d793092f4 (image=quay.io/ceph/ceph:v19, name=distracted_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  9 09:35:57 compute-0 systemd[1]: Started libpod-conmon-d480fc9eb8a9c032f9e9041989d22e2043764464ce147b0a91da213d793092f4.scope.
Oct  9 09:35:57 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d83be20e4d5b9ebfe66d3b9262455a104892fdde1cf7fe9549fa670eef90ae15/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d83be20e4d5b9ebfe66d3b9262455a104892fdde1cf7fe9549fa670eef90ae15/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:57 compute-0 podman[21810]: 2025-10-09 09:35:57.387870177 +0000 UTC m=+0.085308185 container init d480fc9eb8a9c032f9e9041989d22e2043764464ce147b0a91da213d793092f4 (image=quay.io/ceph/ceph:v19, name=distracted_jones, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct  9 09:35:57 compute-0 podman[21810]: 2025-10-09 09:35:57.392406535 +0000 UTC m=+0.089844534 container start d480fc9eb8a9c032f9e9041989d22e2043764464ce147b0a91da213d793092f4 (image=quay.io/ceph/ceph:v19, name=distracted_jones, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:35:57 compute-0 podman[21810]: 2025-10-09 09:35:57.393897991 +0000 UTC m=+0.091336001 container attach d480fc9eb8a9c032f9e9041989d22e2043764464ce147b0a91da213d793092f4 (image=quay.io/ceph/ceph:v19, name=distracted_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 09:35:57 compute-0 podman[21810]: 2025-10-09 09:35:57.318565727 +0000 UTC m=+0.016003757 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:57 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  9 09:35:57 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.lwqgfy(active, since 6s), standbys: compute-2.takdnm, compute-1.etokpp
Oct  9 09:35:57 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct  9 09:35:57 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1035192713' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  9 09:35:57 compute-0 distracted_jones[21823]: 
Oct  9 09:35:57 compute-0 distracted_jones[21823]: {"fsid":"286f8bf0-da72-5823-9a4e-ac4457d9e609","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":33,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":31,"num_osds":3,"num_up_osds":2,"osd_up_since":1760002494,"num_in_osds":3,"osd_in_since":1760002528,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":38},{"state_name":"unknown","count":1}],"num_pgs":39,"num_pools":8,"num_objects":2,"data_bytes":459280,"bytes_used":56004608,"bytes_avail":42885279744,"bytes_total":42941284352,"unknown_pgs_ratio":0.025641025975346565},"fsmap":{"epoch":2,"btime":"2025-10-09T09:35:51:790448+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":2,"modified":"2025-10-09T09:34:58.583309+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"887becba-bd42-41e6-bb69-dc6391df0b2c":{"message":"Updating node-exporter deployment (+2 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true},"8d7607cc-d5d7-4944-ab7c-e08e7477d3d1":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Oct  9 09:35:57 compute-0 systemd[1]: libpod-d480fc9eb8a9c032f9e9041989d22e2043764464ce147b0a91da213d793092f4.scope: Deactivated successfully.
Oct  9 09:35:57 compute-0 podman[21810]: 2025-10-09 09:35:57.731301117 +0000 UTC m=+0.428739136 container died d480fc9eb8a9c032f9e9041989d22e2043764464ce147b0a91da213d793092f4 (image=quay.io/ceph/ceph:v19, name=distracted_jones, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d83be20e4d5b9ebfe66d3b9262455a104892fdde1cf7fe9549fa670eef90ae15-merged.mount: Deactivated successfully.
Oct  9 09:35:57 compute-0 podman[21810]: 2025-10-09 09:35:57.752125574 +0000 UTC m=+0.449563582 container remove d480fc9eb8a9c032f9e9041989d22e2043764464ce147b0a91da213d793092f4 (image=quay.io/ceph/ceph:v19, name=distracted_jones, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:57 compute-0 systemd[1]: libpod-conmon-d480fc9eb8a9c032f9e9041989d22e2043764464ce147b0a91da213d793092f4.scope: Deactivated successfully.
Oct  9 09:35:57 compute-0 python3[21883]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:58 compute-0 podman[21884]: 2025-10-09 09:35:58.014388472 +0000 UTC m=+0.026543854 container create cdacc372b9c87c1ebdb3d76b1096a746d5b37757efad4e2689bca068b8ca6dbd (image=quay.io/ceph/ceph:v19, name=interesting_darwin, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  9 09:35:58 compute-0 systemd[1]: Started libpod-conmon-cdacc372b9c87c1ebdb3d76b1096a746d5b37757efad4e2689bca068b8ca6dbd.scope.
Oct  9 09:35:58 compute-0 ceph-mon[4497]: Deploying daemon node-exporter.compute-2 on compute-2
Oct  9 09:35:58 compute-0 ceph-mon[4497]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  9 09:35:58 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e9596eeb5b83ac4b85568652d6b691a430a7a971cdd08467ce9f7d35f5a23b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e9596eeb5b83ac4b85568652d6b691a430a7a971cdd08467ce9f7d35f5a23b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:58 compute-0 podman[21884]: 2025-10-09 09:35:58.059968739 +0000 UTC m=+0.072124130 container init cdacc372b9c87c1ebdb3d76b1096a746d5b37757efad4e2689bca068b8ca6dbd (image=quay.io/ceph/ceph:v19, name=interesting_darwin, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct  9 09:35:58 compute-0 podman[21884]: 2025-10-09 09:35:58.065167192 +0000 UTC m=+0.077322573 container start cdacc372b9c87c1ebdb3d76b1096a746d5b37757efad4e2689bca068b8ca6dbd (image=quay.io/ceph/ceph:v19, name=interesting_darwin, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 09:35:58 compute-0 podman[21884]: 2025-10-09 09:35:58.066332305 +0000 UTC m=+0.078487686 container attach cdacc372b9c87c1ebdb3d76b1096a746d5b37757efad4e2689bca068b8ca6dbd (image=quay.io/ceph/ceph:v19, name=interesting_darwin, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 09:35:58 compute-0 podman[21884]: 2025-10-09 09:35:58.003692016 +0000 UTC m=+0.015847417 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:58 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 09:35:58 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1636592391' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 09:35:58 compute-0 interesting_darwin[21896]: 
Oct  9 09:35:58 compute-0 interesting_darwin[21896]: {"epoch":3,"fsid":"286f8bf0-da72-5823-9a4e-ac4457d9e609","modified":"2025-10-09T09:35:19.619597Z","created":"2025-10-09T09:33:38.201593Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Oct  9 09:35:58 compute-0 interesting_darwin[21896]: dumped monmap epoch 3
Oct  9 09:35:58 compute-0 systemd[1]: libpod-cdacc372b9c87c1ebdb3d76b1096a746d5b37757efad4e2689bca068b8ca6dbd.scope: Deactivated successfully.
Oct  9 09:35:58 compute-0 podman[21884]: 2025-10-09 09:35:58.395500033 +0000 UTC m=+0.407655424 container died cdacc372b9c87c1ebdb3d76b1096a746d5b37757efad4e2689bca068b8ca6dbd (image=quay.io/ceph/ceph:v19, name=interesting_darwin, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  9 09:35:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-87e9596eeb5b83ac4b85568652d6b691a430a7a971cdd08467ce9f7d35f5a23b-merged.mount: Deactivated successfully.
Oct  9 09:35:58 compute-0 podman[21884]: 2025-10-09 09:35:58.412849897 +0000 UTC m=+0.425005278 container remove cdacc372b9c87c1ebdb3d76b1096a746d5b37757efad4e2689bca068b8ca6dbd (image=quay.io/ceph/ceph:v19, name=interesting_darwin, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  9 09:35:58 compute-0 systemd[1]: libpod-conmon-cdacc372b9c87c1ebdb3d76b1096a746d5b37757efad4e2689bca068b8ca6dbd.scope: Deactivated successfully.
Oct  9 09:35:58 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v11: 39 pgs: 39 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Oct  9 09:35:58 compute-0 python3[21956]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:58 compute-0 podman[21957]: 2025-10-09 09:35:58.899352972 +0000 UTC m=+0.025657802 container create b2bf4f622edb4eec588c0cdf741ef37831fe4cd556aab2d049eff2828b2cb59d (image=quay.io/ceph/ceph:v19, name=mystifying_shtern, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:58 compute-0 systemd[1]: Started libpod-conmon-b2bf4f622edb4eec588c0cdf741ef37831fe4cd556aab2d049eff2828b2cb59d.scope.
Oct  9 09:35:58 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b767d38c92a052b5d6fc648b3e3c4f06c79bc6383acafefc02a2f12750c557/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b767d38c92a052b5d6fc648b3e3c4f06c79bc6383acafefc02a2f12750c557/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:58 compute-0 podman[21957]: 2025-10-09 09:35:58.949323409 +0000 UTC m=+0.075628269 container init b2bf4f622edb4eec588c0cdf741ef37831fe4cd556aab2d049eff2828b2cb59d (image=quay.io/ceph/ceph:v19, name=mystifying_shtern, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  9 09:35:58 compute-0 podman[21957]: 2025-10-09 09:35:58.953498166 +0000 UTC m=+0.079803006 container start b2bf4f622edb4eec588c0cdf741ef37831fe4cd556aab2d049eff2828b2cb59d (image=quay.io/ceph/ceph:v19, name=mystifying_shtern, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct  9 09:35:58 compute-0 podman[21957]: 2025-10-09 09:35:58.95461056 +0000 UTC m=+0.080915410 container attach b2bf4f622edb4eec588c0cdf741ef37831fe4cd556aab2d049eff2828b2cb59d (image=quay.io/ceph/ceph:v19, name=mystifying_shtern, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  9 09:35:58 compute-0 podman[21957]: 2025-10-09 09:35:58.889242615 +0000 UTC m=+0.015547475 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:35:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Oct  9 09:35:59 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1429686175' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct  9 09:35:59 compute-0 mystifying_shtern[21970]: [client.openstack]
Oct  9 09:35:59 compute-0 mystifying_shtern[21970]: 	key = AQBWgedoAAAAABAA+vk8nE5nieplThBL84fakw==
Oct  9 09:35:59 compute-0 mystifying_shtern[21970]: 	caps mgr = "allow *"
Oct  9 09:35:59 compute-0 mystifying_shtern[21970]: 	caps mon = "profile rbd"
Oct  9 09:35:59 compute-0 mystifying_shtern[21970]: 	caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Oct  9 09:35:59 compute-0 systemd[1]: libpod-b2bf4f622edb4eec588c0cdf741ef37831fe4cd556aab2d049eff2828b2cb59d.scope: Deactivated successfully.
Oct  9 09:35:59 compute-0 podman[21957]: 2025-10-09 09:35:59.276027614 +0000 UTC m=+0.402332453 container died b2bf4f622edb4eec588c0cdf741ef37831fe4cd556aab2d049eff2828b2cb59d (image=quay.io/ceph/ceph:v19, name=mystifying_shtern, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  9 09:35:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:35:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-36b767d38c92a052b5d6fc648b3e3c4f06c79bc6383acafefc02a2f12750c557-merged.mount: Deactivated successfully.
Oct  9 09:35:59 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:35:59 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Oct  9 09:35:59 compute-0 podman[21957]: 2025-10-09 09:35:59.295850378 +0000 UTC m=+0.422155219 container remove b2bf4f622edb4eec588c0cdf741ef37831fe4cd556aab2d049eff2828b2cb59d (image=quay.io/ceph/ceph:v19, name=mystifying_shtern, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  9 09:35:59 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:59 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev 887becba-bd42-41e6-bb69-dc6391df0b2c (Updating node-exporter deployment (+2 -> 3))
Oct  9 09:35:59 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event 887becba-bd42-41e6-bb69-dc6391df0b2c (Updating node-exporter deployment (+2 -> 3)) in 5 seconds
Oct  9 09:35:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Oct  9 09:35:59 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:35:59 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:35:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:35:59 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:35:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:35:59 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:35:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:35:59 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:35:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:35:59 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:35:59 compute-0 systemd[1]: libpod-conmon-b2bf4f622edb4eec588c0cdf741ef37831fe4cd556aab2d049eff2828b2cb59d.scope: Deactivated successfully.
Oct  9 09:35:59 compute-0 podman[22085]: 2025-10-09 09:35:59.659467176 +0000 UTC m=+0.025493169 container create a61a5be5732c839bd9add6e40214fe323f730276b22423c3dfcb480e51404cd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_antonelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:59 compute-0 systemd[5737]: Starting Mark boot as successful...
Oct  9 09:35:59 compute-0 systemd[1]: Started libpod-conmon-a61a5be5732c839bd9add6e40214fe323f730276b22423c3dfcb480e51404cd9.scope.
Oct  9 09:35:59 compute-0 systemd[5737]: Finished Mark boot as successful.
Oct  9 09:35:59 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:59 compute-0 podman[22085]: 2025-10-09 09:35:59.694506454 +0000 UTC m=+0.060532457 container init a61a5be5732c839bd9add6e40214fe323f730276b22423c3dfcb480e51404cd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:59 compute-0 podman[22085]: 2025-10-09 09:35:59.698540754 +0000 UTC m=+0.064566736 container start a61a5be5732c839bd9add6e40214fe323f730276b22423c3dfcb480e51404cd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 09:35:59 compute-0 hopeful_antonelli[22101]: 167 167
Oct  9 09:35:59 compute-0 podman[22085]: 2025-10-09 09:35:59.699577273 +0000 UTC m=+0.065603255 container attach a61a5be5732c839bd9add6e40214fe323f730276b22423c3dfcb480e51404cd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_antonelli, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  9 09:35:59 compute-0 systemd[1]: libpod-a61a5be5732c839bd9add6e40214fe323f730276b22423c3dfcb480e51404cd9.scope: Deactivated successfully.
Oct  9 09:35:59 compute-0 podman[22085]: 2025-10-09 09:35:59.702318835 +0000 UTC m=+0.068344817 container died a61a5be5732c839bd9add6e40214fe323f730276b22423c3dfcb480e51404cd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_antonelli, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f3b4073953c9a84907f8ba51831b0697583d461f881f15d3b46f8c722aa2922-merged.mount: Deactivated successfully.
Oct  9 09:35:59 compute-0 podman[22085]: 2025-10-09 09:35:59.719087141 +0000 UTC m=+0.085113122 container remove a61a5be5732c839bd9add6e40214fe323f730276b22423c3dfcb480e51404cd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_antonelli, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:35:59 compute-0 podman[22085]: 2025-10-09 09:35:59.64967644 +0000 UTC m=+0.015702442 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:59 compute-0 systemd[1]: libpod-conmon-a61a5be5732c839bd9add6e40214fe323f730276b22423c3dfcb480e51404cd9.scope: Deactivated successfully.
Oct  9 09:35:59 compute-0 podman[22122]: 2025-10-09 09:35:59.825295496 +0000 UTC m=+0.025628869 container create 8b6b77cb475dcbed6492754d9f0fd193c51c251188208b15ecf794863cb954f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_wilbur, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:59 compute-0 systemd[1]: Started libpod-conmon-8b6b77cb475dcbed6492754d9f0fd193c51c251188208b15ecf794863cb954f1.scope.
Oct  9 09:35:59 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:35:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c92d6721a1bdf3f54685327f64993b20a14968d5dca602ae1ef491a9a432d6c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c92d6721a1bdf3f54685327f64993b20a14968d5dca602ae1ef491a9a432d6c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c92d6721a1bdf3f54685327f64993b20a14968d5dca602ae1ef491a9a432d6c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c92d6721a1bdf3f54685327f64993b20a14968d5dca602ae1ef491a9a432d6c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c92d6721a1bdf3f54685327f64993b20a14968d5dca602ae1ef491a9a432d6c7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:59 compute-0 podman[22122]: 2025-10-09 09:35:59.867213071 +0000 UTC m=+0.067546435 container init 8b6b77cb475dcbed6492754d9f0fd193c51c251188208b15ecf794863cb954f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_wilbur, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:59 compute-0 podman[22122]: 2025-10-09 09:35:59.875321447 +0000 UTC m=+0.075659740 container start 8b6b77cb475dcbed6492754d9f0fd193c51c251188208b15ecf794863cb954f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:59 compute-0 podman[22122]: 2025-10-09 09:35:59.876463948 +0000 UTC m=+0.076797311 container attach 8b6b77cb475dcbed6492754d9f0fd193c51c251188208b15ecf794863cb954f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  9 09:35:59 compute-0 podman[22122]: 2025-10-09 09:35:59.814913289 +0000 UTC m=+0.015246672 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:00 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/1429686175' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct  9 09:36:00 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:00 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:00 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:00 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:00 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:36:00 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:36:00 compute-0 infallible_wilbur[22136]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:36:00 compute-0 infallible_wilbur[22136]: --> All data devices are unavailable
Oct  9 09:36:00 compute-0 systemd[1]: libpod-8b6b77cb475dcbed6492754d9f0fd193c51c251188208b15ecf794863cb954f1.scope: Deactivated successfully.
Oct  9 09:36:00 compute-0 podman[22122]: 2025-10-09 09:36:00.134845008 +0000 UTC m=+0.335178381 container died 8b6b77cb475dcbed6492754d9f0fd193c51c251188208b15ecf794863cb954f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Oct  9 09:36:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c92d6721a1bdf3f54685327f64993b20a14968d5dca602ae1ef491a9a432d6c7-merged.mount: Deactivated successfully.
Oct  9 09:36:00 compute-0 podman[22122]: 2025-10-09 09:36:00.156391111 +0000 UTC m=+0.356724474 container remove 8b6b77cb475dcbed6492754d9f0fd193c51c251188208b15ecf794863cb954f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_wilbur, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:36:00 compute-0 systemd[1]: libpod-conmon-8b6b77cb475dcbed6492754d9f0fd193c51c251188208b15ecf794863cb954f1.scope: Deactivated successfully.
Oct  9 09:36:00 compute-0 ansible-async_wrapper.py[22361]: Invoked with j887663001671 30 /home/zuul/.ansible/tmp/ansible-tmp-1760002560.0979896-34450-229291880363633/AnsiballZ_command.py _
Oct  9 09:36:00 compute-0 ansible-async_wrapper.py[22373]: Starting module and watcher
Oct  9 09:36:00 compute-0 ansible-async_wrapper.py[22373]: Start watching 22374 (30)
Oct  9 09:36:00 compute-0 ansible-async_wrapper.py[22374]: Start module (22374)
Oct  9 09:36:00 compute-0 ansible-async_wrapper.py[22361]: Return async_wrapper task started.
Oct  9 09:36:00 compute-0 podman[22398]: 2025-10-09 09:36:00.562461188 +0000 UTC m=+0.026114765 container create a47d953892de70e0fcd183775a40be63652dcfc168ae7e6fd35033b9aa6ca8e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_einstein, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:36:00 compute-0 python3[22376]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:36:00 compute-0 systemd[1]: Started libpod-conmon-a47d953892de70e0fcd183775a40be63652dcfc168ae7e6fd35033b9aa6ca8e8.scope.
Oct  9 09:36:00 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:00 compute-0 podman[22409]: 2025-10-09 09:36:00.608019213 +0000 UTC m=+0.029521878 container create 58ec8b756879d9e362295e1bba8d3365d3a921a36c166a349c715df36dbcb3f2 (image=quay.io/ceph/ceph:v19, name=dazzling_hawking, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:36:00 compute-0 podman[22398]: 2025-10-09 09:36:00.613814995 +0000 UTC m=+0.077468591 container init a47d953892de70e0fcd183775a40be63652dcfc168ae7e6fd35033b9aa6ca8e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_einstein, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True)
Oct  9 09:36:00 compute-0 podman[22398]: 2025-10-09 09:36:00.618539903 +0000 UTC m=+0.082193479 container start a47d953892de70e0fcd183775a40be63652dcfc168ae7e6fd35033b9aa6ca8e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:36:00 compute-0 podman[22398]: 2025-10-09 09:36:00.620178531 +0000 UTC m=+0.083832117 container attach a47d953892de70e0fcd183775a40be63652dcfc168ae7e6fd35033b9aa6ca8e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_einstein, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  9 09:36:00 compute-0 upbeat_einstein[22418]: 167 167
Oct  9 09:36:00 compute-0 podman[22398]: 2025-10-09 09:36:00.622463001 +0000 UTC m=+0.086116577 container died a47d953892de70e0fcd183775a40be63652dcfc168ae7e6fd35033b9aa6ca8e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_einstein, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct  9 09:36:00 compute-0 systemd[1]: Started libpod-conmon-58ec8b756879d9e362295e1bba8d3365d3a921a36c166a349c715df36dbcb3f2.scope.
Oct  9 09:36:00 compute-0 systemd[1]: libpod-a47d953892de70e0fcd183775a40be63652dcfc168ae7e6fd35033b9aa6ca8e8.scope: Deactivated successfully.
Oct  9 09:36:00 compute-0 podman[22398]: 2025-10-09 09:36:00.641543239 +0000 UTC m=+0.105196815 container remove a47d953892de70e0fcd183775a40be63652dcfc168ae7e6fd35033b9aa6ca8e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  9 09:36:00 compute-0 podman[22398]: 2025-10-09 09:36:00.551740246 +0000 UTC m=+0.015393832 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:00 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/230d74cfe9f702340eedeaf7a0e16c5c20bbdd96da32cd03296ec4b5715dcbe2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/230d74cfe9f702340eedeaf7a0e16c5c20bbdd96da32cd03296ec4b5715dcbe2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:00 compute-0 podman[22409]: 2025-10-09 09:36:00.653791957 +0000 UTC m=+0.075294632 container init 58ec8b756879d9e362295e1bba8d3365d3a921a36c166a349c715df36dbcb3f2 (image=quay.io/ceph/ceph:v19, name=dazzling_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct  9 09:36:00 compute-0 systemd[1]: libpod-conmon-a47d953892de70e0fcd183775a40be63652dcfc168ae7e6fd35033b9aa6ca8e8.scope: Deactivated successfully.
Oct  9 09:36:00 compute-0 podman[22409]: 2025-10-09 09:36:00.659363201 +0000 UTC m=+0.080865865 container start 58ec8b756879d9e362295e1bba8d3365d3a921a36c166a349c715df36dbcb3f2 (image=quay.io/ceph/ceph:v19, name=dazzling_hawking, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  9 09:36:00 compute-0 podman[22409]: 2025-10-09 09:36:00.660795344 +0000 UTC m=+0.082298009 container attach 58ec8b756879d9e362295e1bba8d3365d3a921a36c166a349c715df36dbcb3f2 (image=quay.io/ceph/ceph:v19, name=dazzling_hawking, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  9 09:36:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-29de192dd73079b85d0445473b51ad6fc7d33551f7894d0e8f075b4d000c5c44-merged.mount: Deactivated successfully.
Oct  9 09:36:00 compute-0 podman[22409]: 2025-10-09 09:36:00.596684178 +0000 UTC m=+0.018186843 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:36:00 compute-0 podman[22448]: 2025-10-09 09:36:00.758760516 +0000 UTC m=+0.029883577 container create 50a939c28b786e1212272917111c966c9d1c9c30bafb1f6fa7963d52f796b03d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:36:00 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v12: 39 pgs: 39 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail; 26 KiB/s rd, 0 B/s wr, 10 op/s
Oct  9 09:36:00 compute-0 systemd[1]: Started libpod-conmon-50a939c28b786e1212272917111c966c9d1c9c30bafb1f6fa7963d52f796b03d.scope.
Oct  9 09:36:00 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4818f82f789f7b50bd4936890c3ac352bfbb000faf35e56c6fbd1ade524a1e13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4818f82f789f7b50bd4936890c3ac352bfbb000faf35e56c6fbd1ade524a1e13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4818f82f789f7b50bd4936890c3ac352bfbb000faf35e56c6fbd1ade524a1e13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4818f82f789f7b50bd4936890c3ac352bfbb000faf35e56c6fbd1ade524a1e13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:00 compute-0 podman[22448]: 2025-10-09 09:36:00.79898992 +0000 UTC m=+0.070112992 container init 50a939c28b786e1212272917111c966c9d1c9c30bafb1f6fa7963d52f796b03d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_pare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  9 09:36:00 compute-0 ceph-mgr[4772]: [progress INFO root] Writing back 5 completed events
Oct  9 09:36:00 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 09:36:00 compute-0 podman[22448]: 2025-10-09 09:36:00.809474661 +0000 UTC m=+0.080597723 container start 50a939c28b786e1212272917111c966c9d1c9c30bafb1f6fa7963d52f796b03d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:36:00 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:00 compute-0 podman[22448]: 2025-10-09 09:36:00.813200322 +0000 UTC m=+0.084323384 container attach 50a939c28b786e1212272917111c966c9d1c9c30bafb1f6fa7963d52f796b03d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:36:00 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event 8d7607cc-d5d7-4944-ab7c-e08e7477d3d1 (Global Recovery Event) in 5 seconds
Oct  9 09:36:00 compute-0 podman[22448]: 2025-10-09 09:36:00.746739481 +0000 UTC m=+0.017862564 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:00 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.24245 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  9 09:36:00 compute-0 dazzling_hawking[22429]: 
Oct  9 09:36:00 compute-0 dazzling_hawking[22429]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  9 09:36:00 compute-0 systemd[1]: libpod-58ec8b756879d9e362295e1bba8d3365d3a921a36c166a349c715df36dbcb3f2.scope: Deactivated successfully.
Oct  9 09:36:00 compute-0 podman[22409]: 2025-10-09 09:36:00.958250153 +0000 UTC m=+0.379752828 container died 58ec8b756879d9e362295e1bba8d3365d3a921a36c166a349c715df36dbcb3f2 (image=quay.io/ceph/ceph:v19, name=dazzling_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  9 09:36:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-230d74cfe9f702340eedeaf7a0e16c5c20bbdd96da32cd03296ec4b5715dcbe2-merged.mount: Deactivated successfully.
Oct  9 09:36:00 compute-0 podman[22409]: 2025-10-09 09:36:00.977897311 +0000 UTC m=+0.399399977 container remove 58ec8b756879d9e362295e1bba8d3365d3a921a36c166a349c715df36dbcb3f2 (image=quay.io/ceph/ceph:v19, name=dazzling_hawking, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  9 09:36:00 compute-0 ansible-async_wrapper.py[22374]: Module complete (22374)
Oct  9 09:36:00 compute-0 systemd[1]: libpod-conmon-58ec8b756879d9e362295e1bba8d3365d3a921a36c166a349c715df36dbcb3f2.scope: Deactivated successfully.
Oct  9 09:36:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:01 compute-0 cool_pare[22480]: {
Oct  9 09:36:01 compute-0 cool_pare[22480]:    "1": [
Oct  9 09:36:01 compute-0 cool_pare[22480]:        {
Oct  9 09:36:01 compute-0 cool_pare[22480]:            "devices": [
Oct  9 09:36:01 compute-0 cool_pare[22480]:                "/dev/loop3"
Oct  9 09:36:01 compute-0 cool_pare[22480]:            ],
Oct  9 09:36:01 compute-0 cool_pare[22480]:            "lv_name": "ceph_lv0",
Oct  9 09:36:01 compute-0 cool_pare[22480]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:36:01 compute-0 cool_pare[22480]:            "lv_size": "21470642176",
Oct  9 09:36:01 compute-0 cool_pare[22480]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:36:01 compute-0 cool_pare[22480]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:36:01 compute-0 cool_pare[22480]:            "name": "ceph_lv0",
Oct  9 09:36:01 compute-0 cool_pare[22480]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:36:01 compute-0 cool_pare[22480]:            "tags": {
Oct  9 09:36:01 compute-0 cool_pare[22480]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:36:01 compute-0 cool_pare[22480]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:36:01 compute-0 cool_pare[22480]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:36:01 compute-0 cool_pare[22480]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:36:01 compute-0 cool_pare[22480]:                "ceph.cluster_name": "ceph",
Oct  9 09:36:01 compute-0 cool_pare[22480]:                "ceph.crush_device_class": "",
Oct  9 09:36:01 compute-0 cool_pare[22480]:                "ceph.encrypted": "0",
Oct  9 09:36:01 compute-0 cool_pare[22480]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:36:01 compute-0 cool_pare[22480]:                "ceph.osd_id": "1",
Oct  9 09:36:01 compute-0 cool_pare[22480]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:36:01 compute-0 cool_pare[22480]:                "ceph.type": "block",
Oct  9 09:36:01 compute-0 cool_pare[22480]:                "ceph.vdo": "0",
Oct  9 09:36:01 compute-0 cool_pare[22480]:                "ceph.with_tpm": "0"
Oct  9 09:36:01 compute-0 cool_pare[22480]:            },
Oct  9 09:36:01 compute-0 cool_pare[22480]:            "type": "block",
Oct  9 09:36:01 compute-0 cool_pare[22480]:            "vg_name": "ceph_vg0"
Oct  9 09:36:01 compute-0 cool_pare[22480]:        }
Oct  9 09:36:01 compute-0 cool_pare[22480]:    ]
Oct  9 09:36:01 compute-0 cool_pare[22480]: }
Oct  9 09:36:01 compute-0 systemd[1]: libpod-50a939c28b786e1212272917111c966c9d1c9c30bafb1f6fa7963d52f796b03d.scope: Deactivated successfully.
Oct  9 09:36:01 compute-0 podman[22448]: 2025-10-09 09:36:01.038868786 +0000 UTC m=+0.309991847 container died 50a939c28b786e1212272917111c966c9d1c9c30bafb1f6fa7963d52f796b03d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:36:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-4818f82f789f7b50bd4936890c3ac352bfbb000faf35e56c6fbd1ade524a1e13-merged.mount: Deactivated successfully.
Oct  9 09:36:01 compute-0 podman[22448]: 2025-10-09 09:36:01.060517706 +0000 UTC m=+0.331640768 container remove 50a939c28b786e1212272917111c966c9d1c9c30bafb1f6fa7963d52f796b03d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_pare, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:36:01 compute-0 systemd[1]: libpod-conmon-50a939c28b786e1212272917111c966c9d1c9c30bafb1f6fa7963d52f796b03d.scope: Deactivated successfully.
Oct  9 09:36:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Oct  9 09:36:01 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct  9 09:36:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:36:01 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:36:01 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Oct  9 09:36:01 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Oct  9 09:36:01 compute-0 podman[22589]: 2025-10-09 09:36:01.444176227 +0000 UTC m=+0.025469384 container create b508f70e8384d43ad9b8e6f2741e880364ca101ea3e66bd751df215d2891bc20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:36:01 compute-0 systemd[1]: Started libpod-conmon-b508f70e8384d43ad9b8e6f2741e880364ca101ea3e66bd751df215d2891bc20.scope.
Oct  9 09:36:01 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:01 compute-0 podman[22589]: 2025-10-09 09:36:01.496940374 +0000 UTC m=+0.078233551 container init b508f70e8384d43ad9b8e6f2741e880364ca101ea3e66bd751df215d2891bc20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_grothendieck, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:36:01 compute-0 podman[22589]: 2025-10-09 09:36:01.50293691 +0000 UTC m=+0.084230057 container start b508f70e8384d43ad9b8e6f2741e880364ca101ea3e66bd751df215d2891bc20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_grothendieck, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:36:01 compute-0 podman[22589]: 2025-10-09 09:36:01.504264434 +0000 UTC m=+0.085557601 container attach b508f70e8384d43ad9b8e6f2741e880364ca101ea3e66bd751df215d2891bc20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 09:36:01 compute-0 strange_grothendieck[22603]: 167 167
Oct  9 09:36:01 compute-0 systemd[1]: libpod-b508f70e8384d43ad9b8e6f2741e880364ca101ea3e66bd751df215d2891bc20.scope: Deactivated successfully.
Oct  9 09:36:01 compute-0 podman[22589]: 2025-10-09 09:36:01.506215307 +0000 UTC m=+0.087508464 container died b508f70e8384d43ad9b8e6f2741e880364ca101ea3e66bd751df215d2891bc20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_grothendieck, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  9 09:36:01 compute-0 podman[22589]: 2025-10-09 09:36:01.526599894 +0000 UTC m=+0.107893051 container remove b508f70e8384d43ad9b8e6f2741e880364ca101ea3e66bd751df215d2891bc20 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  9 09:36:01 compute-0 podman[22589]: 2025-10-09 09:36:01.433995485 +0000 UTC m=+0.015288652 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:01 compute-0 systemd[1]: libpod-conmon-b508f70e8384d43ad9b8e6f2741e880364ca101ea3e66bd751df215d2891bc20.scope: Deactivated successfully.
Oct  9 09:36:01 compute-0 podman[22674]: 2025-10-09 09:36:01.641087672 +0000 UTC m=+0.027007589 container create f169a518d900801e7447d896fd6110da5f4b3a11cfc85850601684fb61fb4740 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  9 09:36:01 compute-0 systemd[1]: Started libpod-conmon-f169a518d900801e7447d896fd6110da5f4b3a11cfc85850601684fb61fb4740.scope.
Oct  9 09:36:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-91e5db4cf1bee2e29763121f50207ee560994901d1240708938705d09d9c402b-merged.mount: Deactivated successfully.
Oct  9 09:36:01 compute-0 python3[22668]: ansible-ansible.legacy.async_status Invoked with jid=j887663001671.22361 mode=status _async_dir=/root/.ansible_async
Oct  9 09:36:01 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd3f545f422c0bb367cf7bc6858297aea12afd3e5da0b087c7f97607c3b182f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd3f545f422c0bb367cf7bc6858297aea12afd3e5da0b087c7f97607c3b182f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd3f545f422c0bb367cf7bc6858297aea12afd3e5da0b087c7f97607c3b182f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd3f545f422c0bb367cf7bc6858297aea12afd3e5da0b087c7f97607c3b182f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:01 compute-0 podman[22674]: 2025-10-09 09:36:01.69695326 +0000 UTC m=+0.082873177 container init f169a518d900801e7447d896fd6110da5f4b3a11cfc85850601684fb61fb4740 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 09:36:01 compute-0 podman[22674]: 2025-10-09 09:36:01.70184196 +0000 UTC m=+0.087761878 container start f169a518d900801e7447d896fd6110da5f4b3a11cfc85850601684fb61fb4740 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  9 09:36:01 compute-0 podman[22674]: 2025-10-09 09:36:01.703059585 +0000 UTC m=+0.088979502 container attach f169a518d900801e7447d896fd6110da5f4b3a11cfc85850601684fb61fb4740 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_dubinsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:36:01 compute-0 podman[22674]: 2025-10-09 09:36:01.630257972 +0000 UTC m=+0.016177909 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:01 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:01 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct  9 09:36:01 compute-0 ceph-mon[4497]: Deploying daemon osd.2 on compute-2
Oct  9 09:36:01 compute-0 python3[22741]: ansible-ansible.legacy.async_status Invoked with jid=j887663001671.22361 mode=cleanup _async_dir=/root/.ansible_async
Oct  9 09:36:02 compute-0 lvm[22813]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:36:02 compute-0 lvm[22813]: VG ceph_vg0 finished
Oct  9 09:36:02 compute-0 thirsty_dubinsky[22688]: {}
Oct  9 09:36:02 compute-0 systemd[1]: libpod-f169a518d900801e7447d896fd6110da5f4b3a11cfc85850601684fb61fb4740.scope: Deactivated successfully.
Oct  9 09:36:02 compute-0 podman[22674]: 2025-10-09 09:36:02.194702751 +0000 UTC m=+0.580622668 container died f169a518d900801e7447d896fd6110da5f4b3a11cfc85850601684fb61fb4740 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Oct  9 09:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbd3f545f422c0bb367cf7bc6858297aea12afd3e5da0b087c7f97607c3b182f-merged.mount: Deactivated successfully.
Oct  9 09:36:02 compute-0 podman[22674]: 2025-10-09 09:36:02.216573865 +0000 UTC m=+0.602493782 container remove f169a518d900801e7447d896fd6110da5f4b3a11cfc85850601684fb61fb4740 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_dubinsky, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:36:02 compute-0 systemd[1]: libpod-conmon-f169a518d900801e7447d896fd6110da5f4b3a11cfc85850601684fb61fb4740.scope: Deactivated successfully.
Oct  9 09:36:02 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:36:02 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:02 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:36:02 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:02 compute-0 python3[22849]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:36:02 compute-0 podman[22850]: 2025-10-09 09:36:02.421521509 +0000 UTC m=+0.026809982 container create 88bd838c558ca1bf4e6f88c8c3981cb604d47c552993296c038b2f88e8505cf7 (image=quay.io/ceph/ceph:v19, name=reverent_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:36:02 compute-0 systemd[1]: Started libpod-conmon-88bd838c558ca1bf4e6f88c8c3981cb604d47c552993296c038b2f88e8505cf7.scope.
Oct  9 09:36:02 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d96791f905812116546cbf9adf1422c373ef8d610fa7d980f8676208ecf798b9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d96791f905812116546cbf9adf1422c373ef8d610fa7d980f8676208ecf798b9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:02 compute-0 podman[22850]: 2025-10-09 09:36:02.483494073 +0000 UTC m=+0.088782566 container init 88bd838c558ca1bf4e6f88c8c3981cb604d47c552993296c038b2f88e8505cf7 (image=quay.io/ceph/ceph:v19, name=reverent_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  9 09:36:02 compute-0 podman[22850]: 2025-10-09 09:36:02.487972491 +0000 UTC m=+0.093260964 container start 88bd838c558ca1bf4e6f88c8c3981cb604d47c552993296c038b2f88e8505cf7 (image=quay.io/ceph/ceph:v19, name=reverent_wu, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  9 09:36:02 compute-0 podman[22850]: 2025-10-09 09:36:02.489273133 +0000 UTC m=+0.094561607 container attach 88bd838c558ca1bf4e6f88c8c3981cb604d47c552993296c038b2f88e8505cf7 (image=quay.io/ceph/ceph:v19, name=reverent_wu, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  9 09:36:02 compute-0 podman[22850]: 2025-10-09 09:36:02.410481337 +0000 UTC m=+0.015769830 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:36:02 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.24251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  9 09:36:02 compute-0 reverent_wu[22863]: 
Oct  9 09:36:02 compute-0 reverent_wu[22863]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  9 09:36:02 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v13: 39 pgs: 39 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Oct  9 09:36:02 compute-0 systemd[1]: libpod-88bd838c558ca1bf4e6f88c8c3981cb604d47c552993296c038b2f88e8505cf7.scope: Deactivated successfully.
Oct  9 09:36:02 compute-0 podman[22888]: 2025-10-09 09:36:02.802896572 +0000 UTC m=+0.016174022 container died 88bd838c558ca1bf4e6f88c8c3981cb604d47c552993296c038b2f88e8505cf7 (image=quay.io/ceph/ceph:v19, name=reverent_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d96791f905812116546cbf9adf1422c373ef8d610fa7d980f8676208ecf798b9-merged.mount: Deactivated successfully.
Oct  9 09:36:02 compute-0 podman[22888]: 2025-10-09 09:36:02.821820912 +0000 UTC m=+0.035098362 container remove 88bd838c558ca1bf4e6f88c8c3981cb604d47c552993296c038b2f88e8505cf7 (image=quay.io/ceph/ceph:v19, name=reverent_wu, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 09:36:02 compute-0 systemd[1]: libpod-conmon-88bd838c558ca1bf4e6f88c8c3981cb604d47c552993296c038b2f88e8505cf7.scope: Deactivated successfully.
Oct  9 09:36:03 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:03 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:03 compute-0 python3[22925]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:36:03 compute-0 podman[22926]: 2025-10-09 09:36:03.524767191 +0000 UTC m=+0.027707275 container create 9173fccfb0b50629c0a0521b243966d1a156f329d3764f86c87d46eccc55da65 (image=quay.io/ceph/ceph:v19, name=intelligent_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Oct  9 09:36:03 compute-0 systemd[1]: Started libpod-conmon-9173fccfb0b50629c0a0521b243966d1a156f329d3764f86c87d46eccc55da65.scope.
Oct  9 09:36:03 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e3007a4daaf85dfc1ba27220071a1f170908f591be7582d8bc102f7300fcba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e3007a4daaf85dfc1ba27220071a1f170908f591be7582d8bc102f7300fcba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:03 compute-0 podman[22926]: 2025-10-09 09:36:03.579062793 +0000 UTC m=+0.082002907 container init 9173fccfb0b50629c0a0521b243966d1a156f329d3764f86c87d46eccc55da65 (image=quay.io/ceph/ceph:v19, name=intelligent_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  9 09:36:03 compute-0 podman[22926]: 2025-10-09 09:36:03.583523918 +0000 UTC m=+0.086464001 container start 9173fccfb0b50629c0a0521b243966d1a156f329d3764f86c87d46eccc55da65 (image=quay.io/ceph/ceph:v19, name=intelligent_hoover, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:36:03 compute-0 podman[22926]: 2025-10-09 09:36:03.58477676 +0000 UTC m=+0.087716842 container attach 9173fccfb0b50629c0a0521b243966d1a156f329d3764f86c87d46eccc55da65 (image=quay.io/ceph/ceph:v19, name=intelligent_hoover, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct  9 09:36:03 compute-0 podman[22926]: 2025-10-09 09:36:03.51292017 +0000 UTC m=+0.015860273 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:36:03 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.24257 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  9 09:36:03 compute-0 intelligent_hoover[22938]: 
Oct  9 09:36:03 compute-0 intelligent_hoover[22938]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Oct  9 09:36:03 compute-0 systemd[1]: libpod-9173fccfb0b50629c0a0521b243966d1a156f329d3764f86c87d46eccc55da65.scope: Deactivated successfully.
Oct  9 09:36:03 compute-0 conmon[22938]: conmon 9173fccfb0b50629c0a0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9173fccfb0b50629c0a0521b243966d1a156f329d3764f86c87d46eccc55da65.scope/container/memory.events
Oct  9 09:36:03 compute-0 podman[22964]: 2025-10-09 09:36:03.892623422 +0000 UTC m=+0.015578175 container died 9173fccfb0b50629c0a0521b243966d1a156f329d3764f86c87d46eccc55da65 (image=quay.io/ceph/ceph:v19, name=intelligent_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 09:36:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-47e3007a4daaf85dfc1ba27220071a1f170908f591be7582d8bc102f7300fcba-merged.mount: Deactivated successfully.
Oct  9 09:36:03 compute-0 podman[22964]: 2025-10-09 09:36:03.90917338 +0000 UTC m=+0.032128113 container remove 9173fccfb0b50629c0a0521b243966d1a156f329d3764f86c87d46eccc55da65 (image=quay.io/ceph/ceph:v19, name=intelligent_hoover, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:36:03 compute-0 systemd[1]: libpod-conmon-9173fccfb0b50629c0a0521b243966d1a156f329d3764f86c87d46eccc55da65.scope: Deactivated successfully.
Oct  9 09:36:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:36:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:36:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:04 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:04 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:04 compute-0 python3[23001]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:36:04 compute-0 podman[23002]: 2025-10-09 09:36:04.702033311 +0000 UTC m=+0.024565598 container create e6e02fe045f36f4591de7f0fa65a71188e4d4691b949a54c0613e8303cea11e5 (image=quay.io/ceph/ceph:v19, name=zealous_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  9 09:36:04 compute-0 systemd[1]: Started libpod-conmon-e6e02fe045f36f4591de7f0fa65a71188e4d4691b949a54c0613e8303cea11e5.scope.
Oct  9 09:36:04 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee9c748ae2f36ae072e8c5fc27c31ed949700ee603b8f546eed74536ee90a26/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee9c748ae2f36ae072e8c5fc27c31ed949700ee603b8f546eed74536ee90a26/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:04 compute-0 podman[23002]: 2025-10-09 09:36:04.747725883 +0000 UTC m=+0.070258191 container init e6e02fe045f36f4591de7f0fa65a71188e4d4691b949a54c0613e8303cea11e5 (image=quay.io/ceph/ceph:v19, name=zealous_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:36:04 compute-0 podman[23002]: 2025-10-09 09:36:04.752325366 +0000 UTC m=+0.074857652 container start e6e02fe045f36f4591de7f0fa65a71188e4d4691b949a54c0613e8303cea11e5 (image=quay.io/ceph/ceph:v19, name=zealous_lovelace, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:36:04 compute-0 podman[23002]: 2025-10-09 09:36:04.75331998 +0000 UTC m=+0.075852268 container attach e6e02fe045f36f4591de7f0fa65a71188e4d4691b949a54c0613e8303cea11e5 (image=quay.io/ceph/ceph:v19, name=zealous_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:36:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v14: 39 pgs: 39 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Oct  9 09:36:04 compute-0 podman[23002]: 2025-10-09 09:36:04.691787353 +0000 UTC m=+0.014319660 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:36:05 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.14469 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  9 09:36:05 compute-0 zealous_lovelace[23014]: 
Oct  9 09:36:05 compute-0 zealous_lovelace[23014]: [{"container_id": "69e1dc759038", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.12%", "created": "2025-10-09T09:34:09.817011Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-09T09:35:52.187210Z", "memory_usage": 7799308, "ports": [], "service_name": "crash", "started": "2025-10-09T09:34:09.755610Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@crash.compute-0", "version": "19.2.3"}, {"container_id": "cafaadfcff4f", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.39%", "created": "2025-10-09T09:34:40.234735Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-09T09:35:52.079791Z", "memory_usage": 7817134, "ports": [], "service_name": "crash", "started": "2025-10-09T09:34:40.176924Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@crash.compute-1", "version": "19.2.3"}, {"container_id": "fcd5272d81fa", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.54%", "created": "2025-10-09T09:35:27.147193Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-09T09:35:52.069937Z", "memory_usage": 7799308, "ports": [], "service_name": "crash", "started": "2025-10-09T09:35:27.073672Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@crash.compute-2", "version": "19.2.3"}, {"container_id": "0223bd04566f", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "32.83%", "created": "2025-10-09T09:33:42.179724Z", "daemon_id": "compute-0.lwqgfy", "daemon_name": "mgr.compute-0.lwqgfy", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-09T09:35:52.187096Z", "memory_usage": 540331212, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-10-09T09:33:42.117684Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@mgr.compute-0.lwqgfy", "version": "19.2.3"}, {"container_id": "d27f3e957991", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "82.14%", "created": "2025-10-09T09:35:25.914201Z", "daemon_id": "compute-1.etokpp", "daemon_name": "mgr.compute-1.etokpp", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-09T09:35:52.080020Z", "memory_usage": 503735910, "ports": [8765], "service_name": "mgr", "started": "2025-10-09T09:35:25.841822Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@mgr.compute-1.etokpp", "version": "19.2.3"}, {"container_id": "ac1c41ea23aa", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "70.41%", "created": "2025-10-09T09:35:20.661116Z", "daemon_id": "compute-2.takdnm", "daemon_name": "mgr.compute-2.takdnm", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-09T09:35:52.069868Z", "memory_usage": 504260198, "ports": [8765], "service_name": "mgr", "started": "2025-10-09T09:35:20.590192Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@mgr.compute-2.takdnm", "version": "19.2.3"}, {"container_id": "fb4b20d7f49f", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "1.68%", "created": "2025-10-09T09:33:39.663698Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-09T09:35:52.186999Z", "memory_request": 2147483648, "memory_usage": 55375298, "ports": [], "service_name": "mon", "started": "2025-10-09T09:33:40.908398Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@mon.compute-0", "version": "19.2.3"}, {"container_id": "e3c4abd37c3e", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.11%", "created": "2025-10-09T09:35:15.560095Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-10-09T09:35:52.079955Z", "memory_request": 2147483648, "memory_usage": 39636172, "ports": [], "service_name": "mon", "started": "2025-10-09T09:35:15.497536Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@mon.compute-1", "version": "19.2.3"}, {"container_id": "3269fa105124", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.58%", "created": "2025-10-09T09:35:14.329048Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-10-09T09:35:52.069777Z", "memory_request": 2147483648, "memory_usage": 40265318, "ports": [], "service_name": "mon", "started": "2025-10-09T09:35:14.262156Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@mon.compute-2", "version": "19.2.3"}, {"container_id": "f6c5e5aaa66e", "container_image_digests": ["quay.io/prometheus/node-exporter@sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80"
Oct  9 09:36:05 compute-0 systemd[1]: libpod-e6e02fe045f36f4591de7f0fa65a71188e4d4691b949a54c0613e8303cea11e5.scope: Deactivated successfully.
Oct  9 09:36:05 compute-0 conmon[23014]: conmon e6e02fe045f36f4591de <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e6e02fe045f36f4591de7f0fa65a71188e4d4691b949a54c0613e8303cea11e5.scope/container/memory.events
Oct  9 09:36:05 compute-0 podman[23002]: 2025-10-09 09:36:05.025662698 +0000 UTC m=+0.348194984 container died e6e02fe045f36f4591de7f0fa65a71188e4d4691b949a54c0613e8303cea11e5 (image=quay.io/ceph/ceph:v19, name=zealous_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:36:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-dee9c748ae2f36ae072e8c5fc27c31ed949700ee603b8f546eed74536ee90a26-merged.mount: Deactivated successfully.
Oct  9 09:36:05 compute-0 podman[23002]: 2025-10-09 09:36:05.043360456 +0000 UTC m=+0.365892743 container remove e6e02fe045f36f4591de7f0fa65a71188e4d4691b949a54c0613e8303cea11e5 (image=quay.io/ceph/ceph:v19, name=zealous_lovelace, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:36:05 compute-0 systemd[1]: libpod-conmon-e6e02fe045f36f4591de7f0fa65a71188e4d4691b949a54c0613e8303cea11e5.scope: Deactivated successfully.
Oct  9 09:36:05 compute-0 rsyslogd[1243]: message too long (11806) with configured size 8096, begin of message is: [{"container_id": "69e1dc759038", "container_image_digests": ["quay.io/ceph/ceph [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
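rsyslogd is confirming here that it truncated the inventory above: the payload was 11806 bytes against the default 8096-byte ceiling, and the /e/2445 URL is rsyslog's error page for exactly this condition. One way to keep such JSON blobs intact, sketched for a stock rsyslog v8 layout where /etc/rsyslog.d/*.conf is included early enough to be parsed before the inputs start:

    # /etc/rsyslog.d/00-maxsize.conf  (hypothetical drop-in)
    # Raise the per-message size limit; it must take effect before the
    # input modules begin receiving messages.
    global(maxMessageSize="64k")

followed by a restart of the daemon (systemctl restart rsyslog).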
Oct  9 09:36:05 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:36:05 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:05 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:36:05 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:05 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev 6f20d962-62a2-4877-88de-1787a5630690 (Updating rgw.rgw deployment (+3 -> 3))
Oct  9 09:36:05 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mbbcec", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct  9 09:36:05 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mbbcec", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:36:05 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mbbcec", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
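The dispatch/finished pair above is the mgr minting a keyring for the new RGW daemon on compute-2. Issued by hand, the same request would look like this sketch (entity name and caps copied from the audit log; run it anywhere the admin keyring is available):

    # Create, or fetch if it already exists, the RGW daemon key with the
    # same caps cephadm grants.
    ceph auth get-or-create client.rgw.rgw.compute-2.mbbcec \
        mon 'allow *' mgr 'allow rw' osd 'allow rwx tag rgw *=*'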
Oct  9 09:36:05 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Oct  9 09:36:05 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:05 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:36:05 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:36:05 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.mbbcec on compute-2
Oct  9 09:36:05 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.mbbcec on compute-2
Oct  9 09:36:05 compute-0 ansible-async_wrapper.py[22373]: Done in kid B.
Oct  9 09:36:05 compute-0 python3[23074]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:36:05 compute-0 ceph-mgr[4772]: [progress INFO root] Writing back 6 completed events
Oct  9 09:36:05 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 09:36:05 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:05 compute-0 podman[23075]: 2025-10-09 09:36:05.842265254 +0000 UTC m=+0.024387003 container create b6629b5d23277c5a7ac0db9714af6db09773369c7bacbb5e79bb46f1a17ef8de (image=quay.io/ceph/ceph:v19, name=objective_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  9 09:36:05 compute-0 systemd[1]: Started libpod-conmon-b6629b5d23277c5a7ac0db9714af6db09773369c7bacbb5e79bb46f1a17ef8de.scope.
Oct  9 09:36:05 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e468ccf56f239e4acbc9a02cfa52c09211636e69a58a0ee14f5f1673735a3d01/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e468ccf56f239e4acbc9a02cfa52c09211636e69a58a0ee14f5f1673735a3d01/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:05 compute-0 podman[23075]: 2025-10-09 09:36:05.903058138 +0000 UTC m=+0.085179887 container init b6629b5d23277c5a7ac0db9714af6db09773369c7bacbb5e79bb46f1a17ef8de (image=quay.io/ceph/ceph:v19, name=objective_bartik, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  9 09:36:05 compute-0 podman[23075]: 2025-10-09 09:36:05.906976056 +0000 UTC m=+0.089097804 container start b6629b5d23277c5a7ac0db9714af6db09773369c7bacbb5e79bb46f1a17ef8de (image=quay.io/ceph/ceph:v19, name=objective_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  9 09:36:05 compute-0 podman[23075]: 2025-10-09 09:36:05.908259666 +0000 UTC m=+0.090381414 container attach b6629b5d23277c5a7ac0db9714af6db09773369c7bacbb5e79bb46f1a17ef8de (image=quay.io/ceph/ceph:v19, name=objective_bartik, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  9 09:36:05 compute-0 podman[23075]: 2025-10-09 09:36:05.832945863 +0000 UTC m=+0.015067631 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:36:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Oct  9 09:36:06 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2036627890' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  9 09:36:06 compute-0 objective_bartik[23087]: 
Oct  9 09:36:06 compute-0 objective_bartik[23087]: {"fsid":"286f8bf0-da72-5823-9a4e-ac4457d9e609","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":41,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":31,"num_osds":3,"num_up_osds":2,"osd_up_since":1760002494,"num_in_osds":3,"osd_in_since":1760002528,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":39}],"num_pgs":39,"num_pools":8,"num_objects":3,"data_bytes":459280,"bytes_used":56107008,"bytes_avail":42885177344,"bytes_total":42941284352,"read_bytes_sec":18507,"write_bytes_sec":0,"read_op_per_sec":6,"write_op_per_sec":1},"fsmap":{"epoch":2,"btime":"2025-10-09T09:35:51:790448+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":2,"modified":"2025-10-09T09:34:58.583309+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
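The status above reports HEALTH_ERR purely from the MDS checks (a filesystem is defined but no MDS is up yet) while all 39 PGs are active+clean and 2 of 3 OSDs are up. A sketch for pulling just the health checks through the same containerized client the playbook uses (paths, image and fsid taken from the log; jq is an assumption):

    # Extract only the health checks from `ceph -s -f json`.
    podman run --rm --net=host --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        -s -f json | jq '.health.checks'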
Oct  9 09:36:06 compute-0 systemd[1]: libpod-b6629b5d23277c5a7ac0db9714af6db09773369c7bacbb5e79bb46f1a17ef8de.scope: Deactivated successfully.
Oct  9 09:36:06 compute-0 podman[23075]: 2025-10-09 09:36:06.234521215 +0000 UTC m=+0.416642973 container died b6629b5d23277c5a7ac0db9714af6db09773369c7bacbb5e79bb46f1a17ef8de (image=quay.io/ceph/ceph:v19, name=objective_bartik, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  9 09:36:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-e468ccf56f239e4acbc9a02cfa52c09211636e69a58a0ee14f5f1673735a3d01-merged.mount: Deactivated successfully.
Oct  9 09:36:06 compute-0 podman[23075]: 2025-10-09 09:36:06.255328777 +0000 UTC m=+0.437450525 container remove b6629b5d23277c5a7ac0db9714af6db09773369c7bacbb5e79bb46f1a17ef8de (image=quay.io/ceph/ceph:v19, name=objective_bartik, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  9 09:36:06 compute-0 systemd[1]: libpod-conmon-b6629b5d23277c5a7ac0db9714af6db09773369c7bacbb5e79bb46f1a17ef8de.scope: Deactivated successfully.
Oct  9 09:36:06 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:06 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:06 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mbbcec", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:36:06 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mbbcec", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 09:36:06 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:06 compute-0 ceph-mon[4497]: Deploying daemon rgw.rgw.compute-2.mbbcec on compute-2
Oct  9 09:36:06 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:36:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:36:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct  9 09:36:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.fxnvnn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct  9 09:36:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.fxnvnn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:36:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.fxnvnn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 09:36:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Oct  9 09:36:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:36:06 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:36:06 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.fxnvnn on compute-1
Oct  9 09:36:06 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.fxnvnn on compute-1
Oct  9 09:36:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Oct  9 09:36:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
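Here osd.2, freshly started on compute-2, tags itself with the hdd device class. The manual equivalent of that audit entry, as a sketch:

    # Assign the hdd device class to osd.2.
    ceph osd crush set-device-class hdd osd.2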
Oct  9 09:36:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v15: 39 pgs: 39 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s
Oct  9 09:36:07 compute-0 python3[23149]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:36:07 compute-0 podman[23150]: 2025-10-09 09:36:07.10650894 +0000 UTC m=+0.029054443 container create 4bf4a417ba1a87671b96f73004c9b981a16fd43ef153348f9ad737efe001465b (image=quay.io/ceph/ceph:v19, name=friendly_tharp, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True)
Oct  9 09:36:07 compute-0 systemd[1]: Started libpod-conmon-4bf4a417ba1a87671b96f73004c9b981a16fd43ef153348f9ad737efe001465b.scope.
Oct  9 09:36:07 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3def7f973c15a8d316725d9196102575731e287712a1a40d62df0879a87a780/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3def7f973c15a8d316725d9196102575731e287712a1a40d62df0879a87a780/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:07 compute-0 podman[23150]: 2025-10-09 09:36:07.161814307 +0000 UTC m=+0.084359810 container init 4bf4a417ba1a87671b96f73004c9b981a16fd43ef153348f9ad737efe001465b (image=quay.io/ceph/ceph:v19, name=friendly_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  9 09:36:07 compute-0 podman[23150]: 2025-10-09 09:36:07.16631901 +0000 UTC m=+0.088864513 container start 4bf4a417ba1a87671b96f73004c9b981a16fd43ef153348f9ad737efe001465b (image=quay.io/ceph/ceph:v19, name=friendly_tharp, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:36:07 compute-0 podman[23150]: 2025-10-09 09:36:07.1674536 +0000 UTC m=+0.089999103 container attach 4bf4a417ba1a87671b96f73004c9b981a16fd43ef153348f9ad737efe001465b (image=quay.io/ceph/ceph:v19, name=friendly_tharp, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:36:07 compute-0 podman[23150]: 2025-10-09 09:36:07.095244932 +0000 UTC m=+0.017790455 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:36:07 compute-0 friendly_tharp[23162]: 
Oct  9 09:36:07 compute-0 friendly_tharp[23162]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard//server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.lwqgfy/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.takdnm/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5503523225","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-1.fxnvnn","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.mbbcec","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
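This dump is the assimilated cluster configuration; note the per-daemon client.rgw.* sections pinning rgw_frontends to beast on port 8082 of each host. A jq sketch for filtering a saved dump down to one option (config_dump.json is a hypothetical filename):

    # Show every section that sets rgw_frontends, with its value.
    jq -r '.[] | select(.name == "rgw_frontends")
           | "\(.section): \(.value)"' config_dump.json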
Oct  9 09:36:07 compute-0 systemd[1]: libpod-4bf4a417ba1a87671b96f73004c9b981a16fd43ef153348f9ad737efe001465b.scope: Deactivated successfully.
Oct  9 09:36:07 compute-0 podman[23150]: 2025-10-09 09:36:07.449190829 +0000 UTC m=+0.371736363 container died 4bf4a417ba1a87671b96f73004c9b981a16fd43ef153348f9ad737efe001465b (image=quay.io/ceph/ceph:v19, name=friendly_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:36:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3def7f973c15a8d316725d9196102575731e287712a1a40d62df0879a87a780-merged.mount: Deactivated successfully.
Oct  9 09:36:07 compute-0 podman[23150]: 2025-10-09 09:36:07.467386166 +0000 UTC m=+0.389931669 container remove 4bf4a417ba1a87671b96f73004c9b981a16fd43ef153348f9ad737efe001465b (image=quay.io/ceph/ceph:v19, name=friendly_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:36:07 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:07 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:07 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:07 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.fxnvnn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:36:07 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.fxnvnn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 09:36:07 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:07 compute-0 ceph-mon[4497]: Deploying daemon rgw.rgw.compute-1.fxnvnn on compute-1
Oct  9 09:36:07 compute-0 ceph-mon[4497]: from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  9 09:36:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Oct  9 09:36:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct  9 09:36:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e32 e32: 3 total, 2 up, 3 in
Oct  9 09:36:07 compute-0 systemd[1]: libpod-conmon-4bf4a417ba1a87671b96f73004c9b981a16fd43ef153348f9ad737efe001465b.scope: Deactivated successfully.
Oct  9 09:36:07 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 2 up, 3 in
Oct  9 09:36:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 32 pg[9.0( empty local-lis/les=0/0 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:36:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Oct  9 09:36:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct  9 09:36:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e32 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
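create-or-move is the standard OSD start-up registration: it places osd.2 under host=compute-2, root=default with an initial weight of 0.0195 (CRUSH weights are in TiB, so roughly 20 GiB, consistent with the 40 GiB total reported for the two OSDs already up). The manual equivalent, as a sketch:

    # Register osd.2 in the CRUSH map at its host with its size-based weight.
    ceph osd crush create-or-move osd.2 0.0195 host=compute-2 root=default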
Oct  9 09:36:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 09:36:07 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 09:36:07 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 09:36:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Oct  9 09:36:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  9 09:36:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:36:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:36:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct  9 09:36:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.yciajn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct  9 09:36:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.yciajn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:36:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.yciajn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 09:36:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Oct  9 09:36:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:36:07 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:36:07 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.yciajn on compute-0
Oct  9 09:36:07 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.yciajn on compute-0
Oct  9 09:36:07 compute-0 podman[23281]: 2025-10-09 09:36:07.996758627 +0000 UTC m=+0.025287480 container create f4b870ee6ee800ab23d61fdc7b319b96f2cb53d25934f5322834c3aa27a2f556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  9 09:36:08 compute-0 systemd[1]: Started libpod-conmon-f4b870ee6ee800ab23d61fdc7b319b96f2cb53d25934f5322834c3aa27a2f556.scope.
Oct  9 09:36:08 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:08 compute-0 podman[23281]: 2025-10-09 09:36:08.041009691 +0000 UTC m=+0.069538554 container init f4b870ee6ee800ab23d61fdc7b319b96f2cb53d25934f5322834c3aa27a2f556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:36:08 compute-0 podman[23281]: 2025-10-09 09:36:08.04517758 +0000 UTC m=+0.073706433 container start f4b870ee6ee800ab23d61fdc7b319b96f2cb53d25934f5322834c3aa27a2f556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct  9 09:36:08 compute-0 beautiful_chatelet[23295]: 167 167
Oct  9 09:36:08 compute-0 systemd[1]: libpod-f4b870ee6ee800ab23d61fdc7b319b96f2cb53d25934f5322834c3aa27a2f556.scope: Deactivated successfully.
Oct  9 09:36:08 compute-0 podman[23281]: 2025-10-09 09:36:08.046900829 +0000 UTC m=+0.075429712 container attach f4b870ee6ee800ab23d61fdc7b319b96f2cb53d25934f5322834c3aa27a2f556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  9 09:36:08 compute-0 podman[23281]: 2025-10-09 09:36:08.048240445 +0000 UTC m=+0.076769298 container died f4b870ee6ee800ab23d61fdc7b319b96f2cb53d25934f5322834c3aa27a2f556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:36:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e60ec39ed1f3f753cd46af647e968c1c9ccd129cb6930fcbc985e7bd92ffb54-merged.mount: Deactivated successfully.
Oct  9 09:36:08 compute-0 podman[23281]: 2025-10-09 09:36:08.069020125 +0000 UTC m=+0.097548977 container remove f4b870ee6ee800ab23d61fdc7b319b96f2cb53d25934f5322834c3aa27a2f556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 09:36:08 compute-0 podman[23281]: 2025-10-09 09:36:07.986349941 +0000 UTC m=+0.014878804 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:08 compute-0 systemd[1]: libpod-conmon-f4b870ee6ee800ab23d61fdc7b319b96f2cb53d25934f5322834c3aa27a2f556.scope: Deactivated successfully.
Oct  9 09:36:08 compute-0 systemd[1]: Reloading.
Oct  9 09:36:08 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:08 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
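The rc.local notice from the generator is harmless unless /etc/rc.d/rc.local is actually meant to run at boot; if it is, the fix the message hints at is simply (sketch):

    # Let systemd-rc-local-generator pick the script up on the next reload.
    chmod +x /etc/rc.d/rc.local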
Oct  9 09:36:08 compute-0 python3[23337]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
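This last query reads the oldest client release the current OSD map features still admit. Run directly against the cluster, the check (and, if desired, the corresponding raise) looks like this sketch; the release name is illustrative only:

    # Inspect, then optionally raise, the minimum client compatibility level.
    ceph osd get-require-min-compat-client
    ceph osd set-require-min-compat-client reef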
Oct  9 09:36:08 compute-0 podman[23372]: 2025-10-09 09:36:08.254600171 +0000 UTC m=+0.027883855 container create c86544ec5bd076d1213131a020d0d041bf30ba129f99c0e928fbd7b9dff2b8e4 (image=quay.io/ceph/ceph:v19, name=tender_mclean, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 09:36:08 compute-0 systemd[1]: Started libpod-conmon-c86544ec5bd076d1213131a020d0d041bf30ba129f99c0e928fbd7b9dff2b8e4.scope.
Oct  9 09:36:08 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b098585149de1f4ff9b8fec55ec52e4dd12632d688df26e551dd5b2f9bb70716/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b098585149de1f4ff9b8fec55ec52e4dd12632d688df26e551dd5b2f9bb70716/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:08 compute-0 podman[23372]: 2025-10-09 09:36:08.327093666 +0000 UTC m=+0.100377360 container init c86544ec5bd076d1213131a020d0d041bf30ba129f99c0e928fbd7b9dff2b8e4 (image=quay.io/ceph/ceph:v19, name=tender_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  9 09:36:08 compute-0 systemd[1]: Reloading.
Oct  9 09:36:08 compute-0 podman[23372]: 2025-10-09 09:36:08.338193024 +0000 UTC m=+0.111476708 container start c86544ec5bd076d1213131a020d0d041bf30ba129f99c0e928fbd7b9dff2b8e4 (image=quay.io/ceph/ceph:v19, name=tender_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  9 09:36:08 compute-0 podman[23372]: 2025-10-09 09:36:08.243070752 +0000 UTC m=+0.016354437 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:36:08 compute-0 podman[23372]: 2025-10-09 09:36:08.340329513 +0000 UTC m=+0.113613196 container attach c86544ec5bd076d1213131a020d0d041bf30ba129f99c0e928fbd7b9dff2b8e4 (image=quay.io/ceph/ceph:v19, name=tender_mclean, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  9 09:36:08 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:08 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
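The two generator notices just above are advisory, not errors. If rc.local support were actually wanted on this host, the conventional fix is sketched below; nothing in this deployment requires it.

    # mark the script executable so systemd-rc-local-generator stops skipping it
    chmod +x /etc/rc.d/rc.local
    # re-run the generators so the change is picked up
    systemctl daemon-reload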
Oct  9 09:36:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Oct  9 09:36:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Oct  9 09:36:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct  9 09:36:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e33 e33: 3 total, 2 up, 3 in
Oct  9 09:36:08 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 2 up, 3 in
Oct  9 09:36:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 09:36:08 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 09:36:08 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
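The ENOENT above is transient: the mgr polls OSD metadata while osd.2 is still booting (its boot is recorded in osdmap e34 further down), and the mgr keeps retrying until the OSD has registered. Once the OSD is up, the same query can be made by hand:

    # returns hostname, device class, version, etc. for the given OSD id
    ceph osd metadata 2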
Oct  9 09:36:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 33 pg[2.15( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=13.069715500s) [] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active pruub 91.083114624s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 33 pg[2.15( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=13.069715500s) [] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.083114624s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:08 compute-0 ceph-mon[4497]: from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct  9 09:36:08 compute-0 ceph-mon[4497]: from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct  9 09:36:08 compute-0 ceph-mon[4497]: from='client.? 192.168.122.102:0/573248088' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  9 09:36:08 compute-0 ceph-mon[4497]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  9 09:36:08 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:08 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:08 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:08 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.yciajn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:36:08 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.yciajn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
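The mon_command pair above (dispatch, then finished) is the mgr provisioning a keyring for the new RGW daemon. The plain-CLI equivalent, with the entity and caps copied verbatim from the log, would be roughly:

    # create (or fetch) the daemon's keyring with the logged capability set
    ceph auth get-or-create client.rgw.rgw.compute-0.yciajn \
        mon 'allow *' mgr 'allow rw' osd 'allow rwx tag rgw *=*'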
Oct  9 09:36:08 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 33 pg[2.13( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=13.069585800s) [] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active pruub 91.083030701s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 33 pg[2.13( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=13.069585800s) [] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.083030701s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 33 pg[2.10( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=13.069499016s) [] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active pruub 91.083114624s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:08 compute-0 ceph-mgr[4772]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/4056276867; not ready for session (expect reconnect)
Oct  9 09:36:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 33 pg[2.10( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=13.069499016s) [] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.083114624s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 33 pg[5.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=33 pruub=11.016321182s) [] r=-1 lpr=33 pi=[13,33)/1 crt=0'0 mlcod 0'0 active pruub 89.029991150s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 33 pg[5.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=33 pruub=11.016321182s) [] r=-1 lpr=33 pi=[13,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.029991150s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 33 pg[2.1b( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=13.069240570s) [] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active pruub 91.083000183s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 33 pg[2.d( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=13.068908691s) [] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active pruub 91.082687378s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 33 pg[2.1b( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=13.069240570s) [] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.083000183s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 33 pg[2.d( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=13.068908691s) [] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.082687378s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 33 pg[2.a( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=13.069106102s) [] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active pruub 91.082954407s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 33 pg[2.a( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=13.069106102s) [] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.082954407s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 33 pg[2.c( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=13.068817139s) [] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 active pruub 91.082695007s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 33 pg[2.c( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=33 pruub=13.068817139s) [] r=-1 lpr=33 pi=[22,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.082695007s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 33 pg[3.0( empty local-lis/les=11/12 n=0 ec=11/11 lis/c=11/11 les/c/f=12/12/0 sis=33 pruub=8.999398232s) [] r=-1 lpr=33 pi=[11,33)/1 crt=0'0 mlcod 0'0 active pruub 87.014167786s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 33 pg[3.0( empty local-lis/les=11/12 n=0 ec=11/11 lis/c=11/11 les/c/f=12/12/0 sis=33 pruub=8.999398232s) [] r=-1 lpr=33 pi=[11,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.014167786s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 33 pg[9.0( empty local-lis/les=32/33 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:36:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 09:36:08 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 09:36:08 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 09:36:08 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.yciajn for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:36:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Oct  9 09:36:08 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3729780142' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct  9 09:36:08 compute-0 tender_mclean[23386]: mimic
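The "mimic" above is the output of the one-off tender_mclean container, which ran the osd get-require-min-compat-client command dispatched two lines earlier. From an admin shell, the same check, and the optional raise, look like this; the release name in the second command is purely illustrative:

    # read the current client-compatibility floor (this is what printed "mimic" above)
    ceph osd get-require-min-compat-client
    # optionally raise it; the mon refuses if older clients are still connected
    ceph osd set-require-min-compat-client luminous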
Oct  9 09:36:08 compute-0 systemd[1]: libpod-c86544ec5bd076d1213131a020d0d041bf30ba129f99c0e928fbd7b9dff2b8e4.scope: Deactivated successfully.
Oct  9 09:36:08 compute-0 podman[23372]: 2025-10-09 09:36:08.65367072 +0000 UTC m=+0.426954405 container died c86544ec5bd076d1213131a020d0d041bf30ba129f99c0e928fbd7b9dff2b8e4 (image=quay.io/ceph/ceph:v19, name=tender_mclean, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:36:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-b098585149de1f4ff9b8fec55ec52e4dd12632d688df26e551dd5b2f9bb70716-merged.mount: Deactivated successfully.
Oct  9 09:36:08 compute-0 podman[23372]: 2025-10-09 09:36:08.675608784 +0000 UTC m=+0.448892468 container remove c86544ec5bd076d1213131a020d0d041bf30ba129f99c0e928fbd7b9dff2b8e4 (image=quay.io/ceph/ceph:v19, name=tender_mclean, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:36:08 compute-0 systemd[1]: libpod-conmon-c86544ec5bd076d1213131a020d0d041bf30ba129f99c0e928fbd7b9dff2b8e4.scope: Deactivated successfully.
Oct  9 09:36:08 compute-0 podman[23499]: 2025-10-09 09:36:08.705250534 +0000 UTC m=+0.028254034 container create 401a76b2a1228f9d050b70960b3719d71c80520cd347da022170084fe08c866a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-rgw-rgw-compute-0-yciajn, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:36:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e88dba24336c31a4e290ee6b21a6e841cb401c0001c2d02f294086ec2bdc35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e88dba24336c31a4e290ee6b21a6e841cb401c0001c2d02f294086ec2bdc35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e88dba24336c31a4e290ee6b21a6e841cb401c0001c2d02f294086ec2bdc35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e88dba24336c31a4e290ee6b21a6e841cb401c0001c2d02f294086ec2bdc35/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.yciajn supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:08 compute-0 podman[23499]: 2025-10-09 09:36:08.748390803 +0000 UTC m=+0.071394303 container init 401a76b2a1228f9d050b70960b3719d71c80520cd347da022170084fe08c866a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-rgw-rgw-compute-0-yciajn, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  9 09:36:08 compute-0 podman[23499]: 2025-10-09 09:36:08.754287962 +0000 UTC m=+0.077291452 container start 401a76b2a1228f9d050b70960b3719d71c80520cd347da022170084fe08c866a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-rgw-rgw-compute-0-yciajn, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  9 09:36:08 compute-0 bash[23499]: 401a76b2a1228f9d050b70960b3719d71c80520cd347da022170084fe08c866a
Oct  9 09:36:08 compute-0 podman[23499]: 2025-10-09 09:36:08.693792199 +0000 UTC m=+0.016795709 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:08 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.yciajn for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:36:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v18: 40 pgs: 1 unknown, 39 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Oct  9 09:36:08 compute-0 radosgw[23518]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct  9 09:36:08 compute-0 radosgw[23518]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Oct  9 09:36:08 compute-0 radosgw[23518]: framework: beast
Oct  9 09:36:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:36:08 compute-0 radosgw[23518]: framework conf key: endpoint, val: 192.168.122.100:8082
Oct  9 09:36:08 compute-0 radosgw[23518]: init_numa not setting numa affinity
Oct  9 09:36:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:36:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct  9 09:36:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:08 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev 6f20d962-62a2-4877-88de-1787a5630690 (Updating rgw.rgw deployment (+3 -> 3))
Oct  9 09:36:08 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event 6f20d962-62a2-4877-88de-1787a5630690 (Updating rgw.rgw deployment (+3 -> 3)) in 3 seconds
Oct  9 09:36:08 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  9 09:36:08 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
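The rgw.rgw spec being saved above is a standard cephadm service spec. A minimal sketch of a spec that would produce this placement, assuming the usual YAML fields; the frontend port matches the endpoint radosgw logged above (192.168.122.100:8082), everything else is an assumption:

    # hypothetical spec file; apply with ceph orch
    cat > rgw-spec.yml <<'EOF'
    service_type: rgw
    service_id: rgw
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    spec:
      rgw_frontend_port: 8082
    EOF
    ceph orch apply -i rgw-spec.yml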
Oct  9 09:36:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct  9 09:36:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Oct  9 09:36:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:08 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev fd39dbf1-5c09-4ff2-af4a-0c4013715fa8 (Updating mds.cephfs deployment (+3 -> 3))
Oct  9 09:36:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zfggbi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct  9 09:36:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zfggbi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  9 09:36:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zfggbi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  9 09:36:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:36:08 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:36:08 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.zfggbi on compute-2
Oct  9 09:36:08 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.zfggbi on compute-2
Oct  9 09:36:09 compute-0 python3[24132]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
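Reflowed for readability, the podman invocation ansible just logged is the following; every path, the image tag, and the fsid are exactly as recorded above:

    # one-shot container (--rm) that runs the ceph CLI against this cluster
    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        versions -f json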
Oct  9 09:36:09 compute-0 podman[24133]: 2025-10-09 09:36:09.453429088 +0000 UTC m=+0.028926752 container create 378ffb4785d477741ad249e6d5a3e525eb25b4184315c33d7e0d4dbf76ff9c20 (image=quay.io/ceph/ceph:v19, name=sleepy_lumiere, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 09:36:09 compute-0 systemd[1]: Started libpod-conmon-378ffb4785d477741ad249e6d5a3e525eb25b4184315c33d7e0d4dbf76ff9c20.scope.
Oct  9 09:36:09 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:09 compute-0 ceph-mgr[4772]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/4056276867; not ready for session (expect reconnect)
Oct  9 09:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a24ab8ac0b13e0d65fab19734ca67e4ff30ab53469943da038d171ad57ba4fd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a24ab8ac0b13e0d65fab19734ca67e4ff30ab53469943da038d171ad57ba4fd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Oct  9 09:36:09 compute-0 ceph-mon[4497]: Deploying daemon rgw.rgw.compute-0.yciajn on compute-0
Oct  9 09:36:09 compute-0 ceph-mon[4497]: from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Oct  9 09:36:09 compute-0 ceph-mon[4497]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct  9 09:36:09 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:09 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:09 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:09 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:09 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:09 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zfggbi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  9 09:36:09 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zfggbi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  9 09:36:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 09:36:09 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 09:36:09 compute-0 ceph-mgr[4772]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  9 09:36:09 compute-0 podman[24133]: 2025-10-09 09:36:09.511403297 +0000 UTC m=+0.086900971 container init 378ffb4785d477741ad249e6d5a3e525eb25b4184315c33d7e0d4dbf76ff9c20 (image=quay.io/ceph/ceph:v19, name=sleepy_lumiere, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:36:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Oct  9 09:36:09 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867] boot
Oct  9 09:36:09 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Oct  9 09:36:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 09:36:09 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 09:36:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Oct  9 09:36:09 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 09:36:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Oct  9 09:36:09 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 09:36:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Oct  9 09:36:09 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
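The three dispatches above are each RGW daemon tagging default.rgw.log on first use. The same tagging can be done manually; this is the CLI form of the logged mon_command:

    # associate the pool with the rgw application so health checks stay quiet
    ceph osd pool application enable default.rgw.log rgw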
Oct  9 09:36:09 compute-0 podman[24133]: 2025-10-09 09:36:09.517199957 +0000 UTC m=+0.092697620 container start 378ffb4785d477741ad249e6d5a3e525eb25b4184315c33d7e0d4dbf76ff9c20 (image=quay.io/ceph/ceph:v19, name=sleepy_lumiere, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Oct  9 09:36:09 compute-0 podman[24133]: 2025-10-09 09:36:09.518502072 +0000 UTC m=+0.093999746 container attach 378ffb4785d477741ad249e6d5a3e525eb25b4184315c33d7e0d4dbf76ff9c20 (image=quay.io/ceph/ceph:v19, name=sleepy_lumiere, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 09:36:09 compute-0 podman[24133]: 2025-10-09 09:36:09.442503769 +0000 UTC m=+0.018001453 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:36:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Oct  9 09:36:09 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2574318436' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct  9 09:36:09 compute-0 sleepy_lumiere[24145]: 
Oct  9 09:36:09 compute-0 systemd[1]: libpod-378ffb4785d477741ad249e6d5a3e525eb25b4184315c33d7e0d4dbf76ff9c20.scope: Deactivated successfully.
Oct  9 09:36:09 compute-0 sleepy_lumiere[24145]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":9}}
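The single-line JSON above is the result of ceph versions -f json: all 9 daemons (3 mon, 3 mgr, 3 osd) report 19.2.3 squid. Pulling the tally out is a one-liner, assuming jq is available on the host:

    # per-version daemon counts; "overall" should equal the sum of the others (9 here)
    ceph versions -f json | jq '.overall'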
Oct  9 09:36:09 compute-0 podman[24170]: 2025-10-09 09:36:09.881230549 +0000 UTC m=+0.016588639 container died 378ffb4785d477741ad249e6d5a3e525eb25b4184315c33d7e0d4dbf76ff9c20 (image=quay.io/ceph/ceph:v19, name=sleepy_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325)
Oct  9 09:36:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a24ab8ac0b13e0d65fab19734ca67e4ff30ab53469943da038d171ad57ba4fd-merged.mount: Deactivated successfully.
Oct  9 09:36:09 compute-0 podman[24170]: 2025-10-09 09:36:09.900223149 +0000 UTC m=+0.035581219 container remove 378ffb4785d477741ad249e6d5a3e525eb25b4184315c33d7e0d4dbf76ff9c20 (image=quay.io/ceph/ceph:v19, name=sleepy_lumiere, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 09:36:09 compute-0 systemd[1]: libpod-conmon-378ffb4785d477741ad249e6d5a3e525eb25b4184315c33d7e0d4dbf76ff9c20.scope: Deactivated successfully.
Oct  9 09:36:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:36:09 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:36:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct  9 09:36:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wjwyle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct  9 09:36:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wjwyle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  9 09:36:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wjwyle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  9 09:36:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:36:10 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:36:10 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.wjwyle on compute-0
Oct  9 09:36:10 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.wjwyle on compute-0
Oct  9 09:36:10 compute-0 podman[24267]: 2025-10-09 09:36:10.379361184 +0000 UTC m=+0.028135039 container create c15c6d8a5ed6e03ea0e1e35927ef6a58bfe10acb8e8c90b7008b21d54e58e873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 09:36:10 compute-0 systemd[1]: Started libpod-conmon-c15c6d8a5ed6e03ea0e1e35927ef6a58bfe10acb8e8c90b7008b21d54e58e873.scope.
Oct  9 09:36:10 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:10 compute-0 podman[24267]: 2025-10-09 09:36:10.424512265 +0000 UTC m=+0.073286130 container init c15c6d8a5ed6e03ea0e1e35927ef6a58bfe10acb8e8c90b7008b21d54e58e873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_pasteur, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:36:10 compute-0 podman[24267]: 2025-10-09 09:36:10.428707695 +0000 UTC m=+0.077481551 container start c15c6d8a5ed6e03ea0e1e35927ef6a58bfe10acb8e8c90b7008b21d54e58e873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_pasteur, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:36:10 compute-0 podman[24267]: 2025-10-09 09:36:10.429981027 +0000 UTC m=+0.078754902 container attach c15c6d8a5ed6e03ea0e1e35927ef6a58bfe10acb8e8c90b7008b21d54e58e873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  9 09:36:10 compute-0 dazzling_pasteur[24280]: 167 167
Oct  9 09:36:10 compute-0 systemd[1]: libpod-c15c6d8a5ed6e03ea0e1e35927ef6a58bfe10acb8e8c90b7008b21d54e58e873.scope: Deactivated successfully.
Oct  9 09:36:10 compute-0 conmon[24280]: conmon c15c6d8a5ed6e03ea0e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c15c6d8a5ed6e03ea0e1e35927ef6a58bfe10acb8e8c90b7008b21d54e58e873.scope/container/memory.events
Oct  9 09:36:10 compute-0 podman[24267]: 2025-10-09 09:36:10.432869643 +0000 UTC m=+0.081643508 container died c15c6d8a5ed6e03ea0e1e35927ef6a58bfe10acb8e8c90b7008b21d54e58e873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct  9 09:36:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a55e787b09dd0f7a7ee585e736e4cc8a6932fb16a7b36891d9558d17e7721a8-merged.mount: Deactivated successfully.
Oct  9 09:36:10 compute-0 podman[24267]: 2025-10-09 09:36:10.450993255 +0000 UTC m=+0.099767110 container remove c15c6d8a5ed6e03ea0e1e35927ef6a58bfe10acb8e8c90b7008b21d54e58e873 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_pasteur, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:36:10 compute-0 podman[24267]: 2025-10-09 09:36:10.367891008 +0000 UTC m=+0.016664873 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:10 compute-0 systemd[1]: libpod-conmon-c15c6d8a5ed6e03ea0e1e35927ef6a58bfe10acb8e8c90b7008b21d54e58e873.scope: Deactivated successfully.
Oct  9 09:36:10 compute-0 systemd[1]: Reloading.
Oct  9 09:36:10 compute-0 ceph-mon[4497]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  9 09:36:10 compute-0 ceph-mon[4497]: Deploying daemon mds.cephfs.compute-2.zfggbi on compute-2
Oct  9 09:36:10 compute-0 ceph-mon[4497]: OSD bench result of 22080.768566 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
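The mclock message above spells out its own remediation: measure the device with a real benchmark, then pin the OSD's IOPS capacity. A hedged sketch; the fio target and the final value are placeholders that must come from an actual measurement, and running fio against a raw device is destructive:

    # example fio job: 4k random writes, 60s, direct I/O (illustration only, clobbers /dev/sdX)
    fio --name=osdbench --filename=/dev/sdX --rw=randwrite --bs=4k \
        --ioengine=libaio --direct=1 --iodepth=16 --runtime=60 --time_based
    # then override the capacity for this OSD; 315 is the value the log says is currently in effect
    ceph config set osd.2 osd_mclock_max_capacity_iops_hdd 315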
Oct  9 09:36:10 compute-0 ceph-mon[4497]: osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867] boot
Oct  9 09:36:10 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 09:36:10 compute-0 ceph-mon[4497]: from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 09:36:10 compute-0 ceph-mon[4497]: from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 09:36:10 compute-0 ceph-mon[4497]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 09:36:10 compute-0 ceph-mon[4497]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 09:36:10 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:10 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:10 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:10 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wjwyle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  9 09:36:10 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wjwyle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  9 09:36:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Oct  9 09:36:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  9 09:36:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  9 09:36:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  9 09:36:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Oct  9 09:36:10 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Oct  9 09:36:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 34 pg[2.15( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34 pruub=11.048747063s) [2] r=-1 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.083114624s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 34 pg[2.13( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34 pruub=11.048652649s) [2] r=-1 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.083030701s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 35 pg[2.15( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34 pruub=11.048700333s) [2] r=-1 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.083114624s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 34 pg[2.10( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34 pruub=11.048687935s) [2] r=-1 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.083114624s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 35 pg[2.10( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34 pruub=11.048661232s) [2] r=-1 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.083114624s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 34 pg[3.0( empty local-lis/les=11/12 n=0 ec=11/11 lis/c=11/11 les/c/f=12/12/0 sis=34 pruub=6.979564667s) [2] r=-1 lpr=34 pi=[11,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.014167786s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 35 pg[3.0( empty local-lis/les=11/12 n=0 ec=11/11 lis/c=11/11 les/c/f=12/12/0 sis=34 pruub=6.979550362s) [2] r=-1 lpr=34 pi=[11,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.014167786s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 34 pg[5.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=34 pruub=8.995253563s) [2] r=-1 lpr=34 pi=[13,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.029991150s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 35 pg[5.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=34 pruub=8.995236397s) [2] r=-1 lpr=34 pi=[13,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.029991150s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 34 pg[2.1b( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34 pruub=11.048195839s) [2] r=-1 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.083000183s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 34 pg[2.d( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34 pruub=11.047855377s) [2] r=-1 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.082687378s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 35 pg[2.1b( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34 pruub=11.048165321s) [2] r=-1 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.083000183s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 35 pg[2.d( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34 pruub=11.047843933s) [2] r=-1 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.082687378s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 34 pg[2.a( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34 pruub=11.048038483s) [2] r=-1 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.082954407s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 35 pg[2.a( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34 pruub=11.048026085s) [2] r=-1 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.082954407s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 34 pg[2.c( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34 pruub=11.047737122s) [2] r=-1 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.082695007s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 35 pg[2.c( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34 pruub=11.047724724s) [2] r=-1 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.082695007s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 35 pg[2.13( empty local-lis/les=22/23 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34 pruub=11.048486710s) [2] r=-1 lpr=34 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.083030701s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e3 new map
Oct  9 09:36:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2025-10-09T09:36:10:513915+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-09T09:35:51.790428+0000#012modified#0112025-10-09T09:35:51.790428+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.zfggbi{-1:14535} state up:standby seq 1 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 09:36:10 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] up:boot
Oct  9 09:36:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] as mds.0
Oct  9 09:36:10 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.zfggbi assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct  9 09:36:10 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct  9 09:36:10 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct  9 09:36:10 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  9 09:36:10 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Oct  9 09:36:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.zfggbi"} v 0)
Oct  9 09:36:10 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.zfggbi"}]: dispatch
Oct  9 09:36:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e3 all = 0
Oct  9 09:36:10 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:10 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e4 new map
Oct  9 09:36:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e4 print_map
    e4
    btime 2025-10-09T09:36:10:526987+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name	cephfs
    epoch	4
    flags	12 joinable allow_snaps allow_multimds_snaps
    created	2025-10-09T09:35:51.790428+0000
    modified	2025-10-09T09:36:10.526981+0000
    tableserver	0
    root	0
    session_timeout	60
    session_autoclose	300
    max_file_size	1099511627776
    max_xattr_size	65536
    required_client_features	{}
    last_failure	0
    last_failure_osd_epoch	0
    compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds	1
    in	0
    up	{0=14535}
    failed
    damaged
    stopped
    data_pools	[7]
    metadata_pool	6
    inline_data	disabled
    balancer
    bal_rank_mask	-1
    standby_count_wanted	0
    qdb_cluster	leader: 0 members:
    [mds.cephfs.compute-2.zfggbi{0:14535} state up:creating seq 1 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 09:36:10 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zfggbi=up:creating}
Oct  9 09:36:10 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.zfggbi is now active in filesystem cephfs as rank 0
Oct  9 09:36:10 compute-0 systemd[1]: Reloading.
Oct  9 09:36:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v21: 41 pgs: 7 peering, 2 unknown, 32 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:36:10 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:10 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:10 compute-0 ceph-mgr[4772]: [progress INFO root] Writing back 7 completed events
Oct  9 09:36:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 09:36:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:10 compute-0 ceph-mgr[4772]: [progress WARNING root] Starting Global Recovery Event,9 pgs not in active + clean state
Oct  9 09:36:10 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.wjwyle for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:36:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:11 compute-0 podman[24417]: 2025-10-09 09:36:11.087029146 +0000 UTC m=+0.028044869 container create 3f7057b7f8c9b79b0156ff941c7bd82cbbd1db185f6bcd3888376796da46b198 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mds-cephfs-compute-0-wjwyle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:36:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1e1d01a3255126c06c28f1da6e5dc02c0d555966022f66759316b02bd08b24e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1e1d01a3255126c06c28f1da6e5dc02c0d555966022f66759316b02bd08b24e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1e1d01a3255126c06c28f1da6e5dc02c0d555966022f66759316b02bd08b24e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1e1d01a3255126c06c28f1da6e5dc02c0d555966022f66759316b02bd08b24e/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.wjwyle supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:11 compute-0 podman[24417]: 2025-10-09 09:36:11.127131068 +0000 UTC m=+0.068146790 container init 3f7057b7f8c9b79b0156ff941c7bd82cbbd1db185f6bcd3888376796da46b198 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mds-cephfs-compute-0-wjwyle, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  9 09:36:11 compute-0 podman[24417]: 2025-10-09 09:36:11.133154766 +0000 UTC m=+0.074170498 container start 3f7057b7f8c9b79b0156ff941c7bd82cbbd1db185f6bcd3888376796da46b198 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mds-cephfs-compute-0-wjwyle, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct  9 09:36:11 compute-0 bash[24417]: 3f7057b7f8c9b79b0156ff941c7bd82cbbd1db185f6bcd3888376796da46b198
Oct  9 09:36:11 compute-0 podman[24417]: 2025-10-09 09:36:11.075944607 +0000 UTC m=+0.016960350 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:11 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.wjwyle for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:36:11 compute-0 ceph-mds[24432]: set uid:gid to 167:167 (ceph:ceph)
Oct  9 09:36:11 compute-0 ceph-mds[24432]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Oct  9 09:36:11 compute-0 ceph-mds[24432]: main not setting numa affinity
Oct  9 09:36:11 compute-0 ceph-mds[24432]: pidfile_write: ignore empty --pid-file
Oct  9 09:36:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mds-cephfs-compute-0-wjwyle[24428]: starting mds.cephfs.compute-0.wjwyle at 
Oct  9 09:36:11 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Updating MDS map to version 4 from mon.0
Oct  9 09:36:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:36:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:36:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct  9 09:36:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.svghvn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Oct  9 09:36:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.svghvn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  9 09:36:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.svghvn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  9 09:36:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:36:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:36:11 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.svghvn on compute-1
Oct  9 09:36:11 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.svghvn on compute-1
Oct  9 09:36:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Oct  9 09:36:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Oct  9 09:36:11 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Oct  9 09:36:11 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 36 pg[11.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:36:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct  9 09:36:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 09:36:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct  9 09:36:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 09:36:11 compute-0 ceph-mon[4497]: Deploying daemon mds.cephfs.compute-0.wjwyle on compute-0
Oct  9 09:36:11 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  9 09:36:11 compute-0 ceph-mon[4497]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  9 09:36:11 compute-0 ceph-mon[4497]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  9 09:36:11 compute-0 ceph-mon[4497]: daemon mds.cephfs.compute-2.zfggbi assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct  9 09:36:11 compute-0 ceph-mon[4497]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct  9 09:36:11 compute-0 ceph-mon[4497]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct  9 09:36:11 compute-0 ceph-mon[4497]: Cluster is now healthy
Oct  9 09:36:11 compute-0 ceph-mon[4497]: daemon mds.cephfs.compute-2.zfggbi is now active in filesystem cephfs as rank 0
Oct  9 09:36:11 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:11 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:11 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:11 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:11 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.svghvn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  9 09:36:11 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.svghvn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  9 09:36:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct  9 09:36:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 09:36:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e5 new map
Oct  9 09:36:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e5 print_map
    e5
    btime 2025-10-09T09:36:11:555720+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name	cephfs
    epoch	5
    flags	12 joinable allow_snaps allow_multimds_snaps
    created	2025-10-09T09:35:51.790428+0000
    modified	2025-10-09T09:36:11.555718+0000
    tableserver	0
    root	0
    session_timeout	60
    session_autoclose	300
    max_file_size	1099511627776
    max_xattr_size	65536
    required_client_features	{}
    last_failure	0
    last_failure_osd_epoch	0
    compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds	1
    in	0
    up	{0=14535}
    failed
    damaged
    stopped
    data_pools	[7]
    metadata_pool	6
    inline_data	disabled
    balancer
    bal_rank_mask	-1
    standby_count_wanted	0
    qdb_cluster	leader: 14535 members: 14535
    [mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 2 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]

    Standby daemons:

    [mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 09:36:11 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Updating MDS map to version 5 from mon.0
Oct  9 09:36:11 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Monitors have assigned me to become a standby
Oct  9 09:36:11 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] up:active
Oct  9 09:36:11 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] up:boot
Oct  9 09:36:11 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zfggbi=up:active} 1 up:standby
Oct  9 09:36:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.wjwyle"} v 0)
Oct  9 09:36:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.wjwyle"}]: dispatch
Oct  9 09:36:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e5 all = 0
Oct  9 09:36:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e6 new map
Oct  9 09:36:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e6 print_map
    e6
    btime 2025-10-09T09:36:11:561187+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name	cephfs
    epoch	5
    flags	12 joinable allow_snaps allow_multimds_snaps
    created	2025-10-09T09:35:51.790428+0000
    modified	2025-10-09T09:36:11.555718+0000
    tableserver	0
    root	0
    session_timeout	60
    session_autoclose	300
    max_file_size	1099511627776
    max_xattr_size	65536
    required_client_features	{}
    last_failure	0
    last_failure_osd_epoch	0
    compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds	1
    in	0
    up	{0=14535}
    failed
    damaged
    stopped
    data_pools	[7]
    metadata_pool	6
    inline_data	disabled
    balancer
    bal_rank_mask	-1
    standby_count_wanted	1
    qdb_cluster	leader: 14535 members: 14535
    [mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 2 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]

    Standby daemons:

    [mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 09:36:11 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zfggbi=up:active} 1 up:standby
Oct  9 09:36:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:36:12 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:36:12 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct  9 09:36:12 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:12 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev fd39dbf1-5c09-4ff2-af4a-0c4013715fa8 (Updating mds.cephfs deployment (+3 -> 3))
Oct  9 09:36:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Oct  9 09:36:12 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event fd39dbf1-5c09-4ff2-af4a-0c4013715fa8 (Updating mds.cephfs deployment (+3 -> 3)) in 4 seconds
Oct  9 09:36:12 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Oct  9 09:36:12 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:12 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev 22db3dbc-9f8b-4ce2-9c79-a55e39621bf8 (Updating alertmanager deployment (+1 -> 1))
Oct  9 09:36:12 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Oct  9 09:36:12 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Oct  9 09:36:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Oct  9 09:36:12 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  9 09:36:12 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  9 09:36:12 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  9 09:36:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Oct  9 09:36:12 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Oct  9 09:36:12 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 37 pg[11.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:36:12 compute-0 ceph-mon[4497]: Deploying daemon mds.cephfs.compute-1.svghvn on compute-1
Oct  9 09:36:12 compute-0 ceph-mon[4497]: from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 09:36:12 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 09:36:12 compute-0 ceph-mon[4497]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 09:36:12 compute-0 ceph-mon[4497]: from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 09:36:12 compute-0 ceph-mon[4497]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 09:36:12 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:12 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:12 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:12 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:12 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:12 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  9 09:36:12 compute-0 ceph-mon[4497]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  9 09:36:12 compute-0 ceph-mon[4497]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  9 09:36:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e7 new map
Oct  9 09:36:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e7 print_map
    e7
    btime 2025-10-09T09:36:12:564873+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name	cephfs
    epoch	5
    flags	12 joinable allow_snaps allow_multimds_snaps
    created	2025-10-09T09:35:51.790428+0000
    modified	2025-10-09T09:36:11.555718+0000
    tableserver	0
    root	0
    session_timeout	60
    session_autoclose	300
    max_file_size	1099511627776
    max_xattr_size	65536
    required_client_features	{}
    last_failure	0
    last_failure_osd_epoch	0
    compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds	1
    in	0
    up	{0=14535}
    failed
    damaged
    stopped
    data_pools	[7]
    metadata_pool	6
    inline_data	disabled
    balancer
    bal_rank_mask	-1
    standby_count_wanted	1
    qdb_cluster	leader: 14535 members: 14535
    [mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 2 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]

    Standby daemons:

    [mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]
    [mds.cephfs.compute-1.svghvn{-1:24317} state up:standby seq 1 addr [v2:192.168.122.101:6804/3081136732,v1:192.168.122.101:6805/3081136732] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 09:36:12 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/3081136732,v1:192.168.122.101:6805/3081136732] up:boot
Oct  9 09:36:12 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zfggbi=up:active} 2 up:standby
Oct  9 09:36:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.svghvn"} v 0)
Oct  9 09:36:12 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.svghvn"}]: dispatch
Oct  9 09:36:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e7 all = 0
Oct  9 09:36:12 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v24: 42 pgs: 1 creating+peering, 7 peering, 34 active+clean; 452 KiB data, 480 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 5.2 KiB/s wr, 21 op/s
Oct  9 09:36:13 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Oct  9 09:36:13 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Oct  9 09:36:13 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Oct  9 09:36:13 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct  9 09:36:13 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 09:36:13 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct  9 09:36:13 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 09:36:13 compute-0 ceph-mon[4497]: Deploying daemon alertmanager.compute-0 on compute-0
Oct  9 09:36:13 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct  9 09:36:13 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 09:36:14 compute-0 podman[24543]: 2025-10-09 09:36:14.170898658 +0000 UTC m=+1.431829034 volume create 8aff2e1aa7a4b92c7b16d10cbe8155a345c22b30f2e831335b3a347fa433e220
Oct  9 09:36:14 compute-0 podman[24543]: 2025-10-09 09:36:14.175030799 +0000 UTC m=+1.435961175 container create b92c8450f6033c5332a5d4c9c2ac0d0e2a67ea66fc4dc398b614b864a9a7bfbc (image=quay.io/prometheus/alertmanager:v0.25.0, name=loving_grothendieck, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:14 compute-0 systemd[1]: Started libpod-conmon-b92c8450f6033c5332a5d4c9c2ac0d0e2a67ea66fc4dc398b614b864a9a7bfbc.scope.
Oct  9 09:36:14 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7789db98f5f99644fb031e2e2ef1f5c783028d47b1b42109cb52ab95e4341d7b/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:14 compute-0 podman[24543]: 2025-10-09 09:36:14.243074042 +0000 UTC m=+1.504004419 container init b92c8450f6033c5332a5d4c9c2ac0d0e2a67ea66fc4dc398b614b864a9a7bfbc (image=quay.io/prometheus/alertmanager:v0.25.0, name=loving_grothendieck, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:14 compute-0 podman[24543]: 2025-10-09 09:36:14.248579493 +0000 UTC m=+1.509509869 container start b92c8450f6033c5332a5d4c9c2ac0d0e2a67ea66fc4dc398b614b864a9a7bfbc (image=quay.io/prometheus/alertmanager:v0.25.0, name=loving_grothendieck, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:14 compute-0 podman[24543]: 2025-10-09 09:36:14.249695057 +0000 UTC m=+1.510625432 container attach b92c8450f6033c5332a5d4c9c2ac0d0e2a67ea66fc4dc398b614b864a9a7bfbc (image=quay.io/prometheus/alertmanager:v0.25.0, name=loving_grothendieck, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:14 compute-0 loving_grothendieck[24655]: 65534 65534
Oct  9 09:36:14 compute-0 systemd[1]: libpod-b92c8450f6033c5332a5d4c9c2ac0d0e2a67ea66fc4dc398b614b864a9a7bfbc.scope: Deactivated successfully.
Oct  9 09:36:14 compute-0 podman[24543]: 2025-10-09 09:36:14.251019795 +0000 UTC m=+1.511950170 container died b92c8450f6033c5332a5d4c9c2ac0d0e2a67ea66fc4dc398b614b864a9a7bfbc (image=quay.io/prometheus/alertmanager:v0.25.0, name=loving_grothendieck, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:14 compute-0 podman[24543]: 2025-10-09 09:36:14.162592066 +0000 UTC m=+1.423522441 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct  9 09:36:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-7789db98f5f99644fb031e2e2ef1f5c783028d47b1b42109cb52ab95e4341d7b-merged.mount: Deactivated successfully.
Oct  9 09:36:14 compute-0 podman[24543]: 2025-10-09 09:36:14.269242703 +0000 UTC m=+1.530173079 container remove b92c8450f6033c5332a5d4c9c2ac0d0e2a67ea66fc4dc398b614b864a9a7bfbc (image=quay.io/prometheus/alertmanager:v0.25.0, name=loving_grothendieck, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:14 compute-0 podman[24543]: 2025-10-09 09:36:14.271494129 +0000 UTC m=+1.532424505 volume remove 8aff2e1aa7a4b92c7b16d10cbe8155a345c22b30f2e831335b3a347fa433e220
Oct  9 09:36:14 compute-0 systemd[1]: libpod-conmon-b92c8450f6033c5332a5d4c9c2ac0d0e2a67ea66fc4dc398b614b864a9a7bfbc.scope: Deactivated successfully.
Oct  9 09:36:14 compute-0 podman[24670]: 2025-10-09 09:36:14.31028084 +0000 UTC m=+0.024613269 volume create 9dadc29a4e7dd739f228cd2e99b889fecd4fcdd1d3a5252932dd74097a3857d3
Oct  9 09:36:14 compute-0 podman[24670]: 2025-10-09 09:36:14.314157829 +0000 UTC m=+0.028490259 container create ead4c764df65b306d4f318801800dcfadabc06f523519fcdac8480e59457dde0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=unruffled_mclean, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:14 compute-0 systemd[1]: Started libpod-conmon-ead4c764df65b306d4f318801800dcfadabc06f523519fcdac8480e59457dde0.scope.
Oct  9 09:36:14 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01dd9122b2e919cbb249d02af4e326b8d26ce4927a52ea5403abd02be244e0d5/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:14 compute-0 podman[24670]: 2025-10-09 09:36:14.357990496 +0000 UTC m=+0.072322935 container init ead4c764df65b306d4f318801800dcfadabc06f523519fcdac8480e59457dde0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=unruffled_mclean, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:14 compute-0 podman[24670]: 2025-10-09 09:36:14.362135641 +0000 UTC m=+0.076468080 container start ead4c764df65b306d4f318801800dcfadabc06f523519fcdac8480e59457dde0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=unruffled_mclean, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:14 compute-0 unruffled_mclean[24683]: 65534 65534
Oct  9 09:36:14 compute-0 systemd[1]: libpod-ead4c764df65b306d4f318801800dcfadabc06f523519fcdac8480e59457dde0.scope: Deactivated successfully.
Oct  9 09:36:14 compute-0 podman[24670]: 2025-10-09 09:36:14.364326963 +0000 UTC m=+0.078659382 container attach ead4c764df65b306d4f318801800dcfadabc06f523519fcdac8480e59457dde0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=unruffled_mclean, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:14 compute-0 podman[24670]: 2025-10-09 09:36:14.364483277 +0000 UTC m=+0.078815706 container died ead4c764df65b306d4f318801800dcfadabc06f523519fcdac8480e59457dde0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=unruffled_mclean, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-01dd9122b2e919cbb249d02af4e326b8d26ce4927a52ea5403abd02be244e0d5-merged.mount: Deactivated successfully.
Oct  9 09:36:14 compute-0 podman[24670]: 2025-10-09 09:36:14.381391237 +0000 UTC m=+0.095723666 container remove ead4c764df65b306d4f318801800dcfadabc06f523519fcdac8480e59457dde0 (image=quay.io/prometheus/alertmanager:v0.25.0, name=unruffled_mclean, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:14 compute-0 podman[24670]: 2025-10-09 09:36:14.382543559 +0000 UTC m=+0.096875989 volume remove 9dadc29a4e7dd739f228cd2e99b889fecd4fcdd1d3a5252932dd74097a3857d3
Oct  9 09:36:14 compute-0 podman[24670]: 2025-10-09 09:36:14.30127524 +0000 UTC m=+0.015607689 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct  9 09:36:14 compute-0 systemd[1]: libpod-conmon-ead4c764df65b306d4f318801800dcfadabc06f523519fcdac8480e59457dde0.scope: Deactivated successfully.
Oct  9 09:36:14 compute-0 systemd[1]: Reloading.
Oct  9 09:36:14 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:14 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Oct  9 09:36:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  9 09:36:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  9 09:36:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  9 09:36:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Oct  9 09:36:14 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Oct  9 09:36:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct  9 09:36:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 09:36:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct  9 09:36:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 09:36:14 compute-0 ceph-mon[4497]: from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 09:36:14 compute-0 ceph-mon[4497]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 09:36:14 compute-0 ceph-mon[4497]: from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 09:36:14 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 09:36:14 compute-0 ceph-mon[4497]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 09:36:14 compute-0 ceph-mon[4497]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  9 09:36:14 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  9 09:36:14 compute-0 ceph-mon[4497]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  9 09:36:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct  9 09:36:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 09:36:14 compute-0 systemd[1]: Reloading.
Oct  9 09:36:14 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:14 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:14 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v27: 43 pgs: 1 unknown, 1 creating+peering, 7 peering, 34 active+clean; 452 KiB data, 480 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 5.2 KiB/s wr, 21 op/s
Oct  9 09:36:14 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:36:14 compute-0 podman[24811]: 2025-10-09 09:36:14.954646818 +0000 UTC m=+0.025385264 volume create 9e67def042e827328b0d7fc63b2a678777c6accad0661d2e3494005ce80ceb8a
Oct  9 09:36:14 compute-0 podman[24811]: 2025-10-09 09:36:14.959068616 +0000 UTC m=+0.029807062 container create bd3cbdfb5f1cb9bb74e2043c48786e84aea19baa506d844adecf836d2e2fa6f1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f79b46ae12a440a24b8f0d9c8dd9165d4911897c94bc41aced8452688e26b442/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f79b46ae12a440a24b8f0d9c8dd9165d4911897c94bc41aced8452688e26b442/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:14 compute-0 podman[24811]: 2025-10-09 09:36:14.999697892 +0000 UTC m=+0.070436337 container init bd3cbdfb5f1cb9bb74e2043c48786e84aea19baa506d844adecf836d2e2fa6f1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:15 compute-0 podman[24811]: 2025-10-09 09:36:15.003241744 +0000 UTC m=+0.073980190 container start bd3cbdfb5f1cb9bb74e2043c48786e84aea19baa506d844adecf836d2e2fa6f1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:15 compute-0 bash[24811]: bd3cbdfb5f1cb9bb74e2043c48786e84aea19baa506d844adecf836d2e2fa6f1
Oct  9 09:36:15 compute-0 podman[24811]: 2025-10-09 09:36:14.945361522 +0000 UTC m=+0.016099978 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct  9 09:36:15 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:36:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[24823]: ts=2025-10-09T09:36:15.023Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Oct  9 09:36:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[24823]: ts=2025-10-09T09:36:15.023Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Oct  9 09:36:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[24823]: ts=2025-10-09T09:36:15.028Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.26.64 port=9094
Oct  9 09:36:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[24823]: ts=2025-10-09T09:36:15.030Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Oct  9 09:36:15 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:36:15 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:36:15 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Oct  9 09:36:15 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev 22db3dbc-9f8b-4ce2-9c79-a55e39621bf8 (Updating alertmanager deployment (+1 -> 1))
Oct  9 09:36:15 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event 22db3dbc-9f8b-4ce2-9c79-a55e39621bf8 (Updating alertmanager deployment (+1 -> 1)) in 3 seconds
Oct  9 09:36:15 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Oct  9 09:36:15 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev 091b0c8b-1ea1-4f90-b862-f4c0f89d406c (Updating grafana deployment (+1 -> 1))
Oct  9 09:36:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[24823]: ts=2025-10-09T09:36:15.059Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Oct  9 09:36:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[24823]: ts=2025-10-09T09:36:15.060Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Oct  9 09:36:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[24823]: ts=2025-10-09T09:36:15.063Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Oct  9 09:36:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[24823]: ts=2025-10-09T09:36:15.063Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Oct  9 09:36:15 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Oct  9 09:36:15 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Oct  9 09:36:15 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Oct  9 09:36:15 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Oct  9 09:36:15 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Oct  9 09:36:15 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct  9 09:36:15 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct  9 09:36:15 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Oct  9 09:36:15 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Oct  9 09:36:15 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Oct  9 09:36:15 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Oct  9 09:36:15 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  9 09:36:15 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  9 09:36:15 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  9 09:36:15 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Oct  9 09:36:15 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Oct  9 09:36:15 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 09:36:15 compute-0 ceph-mon[4497]: from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 09:36:15 compute-0 ceph-mon[4497]: from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 09:36:15 compute-0 ceph-mon[4497]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 09:36:15 compute-0 ceph-mon[4497]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 09:36:15 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct  9 09:36:15 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-0 ceph-mon[4497]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  9 09:36:15 compute-0 ceph-mon[4497]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  9 09:36:15 compute-0 ceph-mon[4497]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
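The audit trail above records each of the three RGW instances issuing the same `osd pool set` command against `default.rgw.meta`, first as a dispatch and then as a finished entry. For reference, a minimal sketch of reproducing that exact command from this host, assuming a working `ceph` CLI and an admin keyring (the wrapper itself is illustrative, not part of the log):

```python
import subprocess

# Reproduce the mon command recorded in the audit log above:
# {"prefix": "osd pool set", "pool": "default.rgw.meta",
#  "var": "pg_autoscale_bias", "val": "4"}
# Assumes the `ceph` CLI and an admin keyring are available on this host.
result = subprocess.run(
    ["ceph", "osd", "pool", "set", "default.rgw.meta", "pg_autoscale_bias", "4"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. "set pool N pg_autoscale_bias to 4"
```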
Oct  9 09:36:15 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e8 new map
Oct  9 09:36:15 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e8 print_map
    e8
    btime 2025-10-09T09:36:15.540254+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  8
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-10-09T09:35:51.790428+0000
    modified  2025-10-09T09:36:14.585925+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds  1
    in  0
    up  {0=14535}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  1
    qdb_cluster  leader: 14535 members: 14535
    [mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]

    Standby daemons:

    [mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]
    [mds.cephfs.compute-1.svghvn{-1:24317} state up:standby seq 1 addr [v2:192.168.122.101:6804/3081136732,v1:192.168.122.101:6805/3081136732] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 09:36:15 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Updating MDS map to version 8 from mon.0
Oct  9 09:36:15 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] up:active
Oct  9 09:36:15 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] up:standby
Oct  9 09:36:15 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zfggbi=up:active} 2 up:standby
Oct  9 09:36:15 compute-0 radosgw[23518]: v1 topic migration: starting v1 topic migration..
Oct  9 09:36:15 compute-0 radosgw[23518]: LDAP not started since no server URIs were provided in the configuration.
Oct  9 09:36:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-rgw-rgw-compute-0-yciajn[23514]: 2025-10-09T09:36:15.597+0000 7f74780ff980 -1 LDAP not started since no server URIs were provided in the configuration.
Oct  9 09:36:15 compute-0 radosgw[23518]: v1 topic migration: finished v1 topic migration
Oct  9 09:36:15 compute-0 radosgw[23518]: framework: beast
Oct  9 09:36:15 compute-0 radosgw[23518]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct  9 09:36:15 compute-0 radosgw[23518]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct  9 09:36:15 compute-0 radosgw[23518]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-0 radosgw[23518]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-0 radosgw[23518]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-0 radosgw[23518]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-0 radosgw[23518]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-0 radosgw[23518]: starting handler: beast
Oct  9 09:36:15 compute-0 radosgw[23518]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-0 radosgw[23518]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
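The RGWReshardLock messages above show this gateway finding several reshard-log shards already locked by a peer RGW process and skipping them, which is normal contention when multiple gateways share one reshard queue. To see what is actually queued for resharding, one could list the queue with `radosgw-admin` (a sketch, assuming the CLI and a cluster keyring are reachable from this host):

```python
import json
import subprocess

# Inspect the bucket-reshard queue whose per-shard locks are being
# contended in the log above. Assumes `radosgw-admin` can reach the cluster.
out = subprocess.run(
    ["radosgw-admin", "reshard", "list"],
    capture_output=True, text=True, check=True,
).stdout
entries = json.loads(out or "[]")
for entry in entries:
    # Entry field names vary by release; print whatever is present.
    print(entry)
```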
Oct  9 09:36:15 compute-0 radosgw[23518]: set uid:gid to 167:167 (ceph:ceph)
Oct  9 09:36:15 compute-0 radosgw[23518]: mgrc service_daemon_register rgw.14526 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC 7763 64-Core Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.yciajn,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865152,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=773beadf-adcd-43ff-a482-a2d7a5b40bd8,zone_name=default,zonegroup_id=74fea7f9-d931-4447-a756-db2299521313,zonegroup_name=default}
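The service_daemon_register line packs the daemon's metadata into one `{key=value,...}` blob. A naive parse into a dict is enough for ad-hoc inspection (a sketch; it assumes no value contains a comma, which happens to hold for this particular line):

```python
# Parse the metadata blob from the service_daemon_register line above.
# Naive split: assumes values contain no commas, which holds here.
blob = "arch=x86_64,ceph_release=squid,ceph_version_short=19.2.3,zone_name=default"  # excerpt
metadata = dict(item.split("=", 1) for item in blob.split(","))
print(metadata["ceph_release"])  # -> "squid"
```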
Oct  9 09:36:15 compute-0 ceph-mgr[4772]: [progress INFO root] Writing back 9 completed events
Oct  9 09:36:15 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 09:36:15 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:16 compute-0 ceph-mon[4497]: Regenerating cephadm self-signed grafana TLS certificates
Oct  9 09:36:16 compute-0 ceph-mon[4497]: Deploying daemon grafana.compute-0 on compute-0
Oct  9 09:36:16 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:16 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v29: 43 pgs: 1 unknown, 1 creating+peering, 41 active+clean; 452 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:36:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e9 new map
Oct  9 09:36:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e9 print_map
    e9
    btime 2025-10-09T09:36:16.832969+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  8
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-10-09T09:35:51.790428+0000
    modified  2025-10-09T09:36:14.585925+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds  1
    in  0
    up  {0=14535}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  1
    qdb_cluster  leader: 14535 members: 14535
    [mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]

    Standby daemons:

    [mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]
    [mds.cephfs.compute-1.svghvn{-1:24317} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/3081136732,v1:192.168.122.101:6805/3081136732] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 09:36:16 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/3081136732,v1:192.168.122.101:6805/3081136732] up:standby
Oct  9 09:36:16 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zfggbi=up:active} 2 up:standby
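The fsmap summary above (one active MDS rank for `cephfs`, two standbys) can also be pulled programmatically rather than scraped from the mon log; a sketch using `ceph fs dump` with JSON output, assuming the `ceph` CLI is available and using the key names seen in recent Ceph releases:

```python
import json
import subprocess

# Fetch the current FSMap as JSON; mirrors the logged summary
# "fsmap cephfs:1 {0=cephfs.compute-2.zfggbi=up:active} 2 up:standby".
# Assumes the `ceph` CLI and a keyring are available; key names are as
# observed in recent releases.
fsmap = json.loads(subprocess.run(
    ["ceph", "fs", "dump", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout)
for fs in fsmap.get("filesystems", []):
    m = fs.get("mdsmap", {})
    print(m.get("fs_name"), "epoch", m.get("epoch"), "max_mds", m.get("max_mds"))
```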
Oct  9 09:36:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[24823]: ts=2025-10-09T09:36:17.031Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000746165s
Oct  9 09:36:18 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v30: 43 pgs: 43 active+clean; 456 KiB data, 485 MiB used, 60 GiB / 60 GiB avail; 230 KiB/s rd, 5.7 KiB/s wr, 422 op/s
Oct  9 09:36:20 compute-0 podman[24925]: 2025-10-09 09:36:20.54775677 +0000 UTC m=+4.983003740 container create eabe282e545b3069d3fb64c36f0dcb3efc762c910f87a15a24fa283a318745fa (image=quay.io/ceph/grafana:10.4.0, name=quizzical_wu, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:36:20 compute-0 systemd[1]: Started libpod-conmon-eabe282e545b3069d3fb64c36f0dcb3efc762c910f87a15a24fa283a318745fa.scope.
Oct  9 09:36:20 compute-0 podman[24925]: 2025-10-09 09:36:20.535837527 +0000 UTC m=+4.971084507 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct  9 09:36:20 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:20 compute-0 podman[24925]: 2025-10-09 09:36:20.600414315 +0000 UTC m=+5.035661285 container init eabe282e545b3069d3fb64c36f0dcb3efc762c910f87a15a24fa283a318745fa (image=quay.io/ceph/grafana:10.4.0, name=quizzical_wu, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:36:20 compute-0 podman[24925]: 2025-10-09 09:36:20.604894632 +0000 UTC m=+5.040141602 container start eabe282e545b3069d3fb64c36f0dcb3efc762c910f87a15a24fa283a318745fa (image=quay.io/ceph/grafana:10.4.0, name=quizzical_wu, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:36:20 compute-0 podman[24925]: 2025-10-09 09:36:20.606050451 +0000 UTC m=+5.041297422 container attach eabe282e545b3069d3fb64c36f0dcb3efc762c910f87a15a24fa283a318745fa (image=quay.io/ceph/grafana:10.4.0, name=quizzical_wu, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:36:20 compute-0 quizzical_wu[25142]: 472 0
Oct  9 09:36:20 compute-0 systemd[1]: libpod-eabe282e545b3069d3fb64c36f0dcb3efc762c910f87a15a24fa283a318745fa.scope: Deactivated successfully.
Oct  9 09:36:20 compute-0 podman[24925]: 2025-10-09 09:36:20.607482722 +0000 UTC m=+5.042729692 container died eabe282e545b3069d3fb64c36f0dcb3efc762c910f87a15a24fa283a318745fa (image=quay.io/ceph/grafana:10.4.0, name=quizzical_wu, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:36:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-71492c1995bbdbdaa155a9e3df2914a6a8caa4e905d54202ad9141da9cab2931-merged.mount: Deactivated successfully.
Oct  9 09:36:20 compute-0 podman[24925]: 2025-10-09 09:36:20.625888466 +0000 UTC m=+5.061135436 container remove eabe282e545b3069d3fb64c36f0dcb3efc762c910f87a15a24fa283a318745fa (image=quay.io/ceph/grafana:10.4.0, name=quizzical_wu, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:36:20 compute-0 systemd[1]: libpod-conmon-eabe282e545b3069d3fb64c36f0dcb3efc762c910f87a15a24fa283a318745fa.scope: Deactivated successfully.
Oct  9 09:36:20 compute-0 podman[25157]: 2025-10-09 09:36:20.670185126 +0000 UTC m=+0.028574358 container create 94df343c93163c25921f75892a7eb6f03a6ea237158ad229ac180ca8561b6480 (image=quay.io/ceph/grafana:10.4.0, name=goofy_mestorf, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:36:20 compute-0 systemd[1]: Started libpod-conmon-94df343c93163c25921f75892a7eb6f03a6ea237158ad229ac180ca8561b6480.scope.
Oct  9 09:36:20 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:20 compute-0 podman[25157]: 2025-10-09 09:36:20.710455956 +0000 UTC m=+0.068845197 container init 94df343c93163c25921f75892a7eb6f03a6ea237158ad229ac180ca8561b6480 (image=quay.io/ceph/grafana:10.4.0, name=goofy_mestorf, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:36:20 compute-0 podman[25157]: 2025-10-09 09:36:20.715128316 +0000 UTC m=+0.073517547 container start 94df343c93163c25921f75892a7eb6f03a6ea237158ad229ac180ca8561b6480 (image=quay.io/ceph/grafana:10.4.0, name=goofy_mestorf, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:36:20 compute-0 goofy_mestorf[25171]: 472 0
Oct  9 09:36:20 compute-0 systemd[1]: libpod-94df343c93163c25921f75892a7eb6f03a6ea237158ad229ac180ca8561b6480.scope: Deactivated successfully.
Oct  9 09:36:20 compute-0 podman[25157]: 2025-10-09 09:36:20.717656242 +0000 UTC m=+0.076045493 container attach 94df343c93163c25921f75892a7eb6f03a6ea237158ad229ac180ca8561b6480 (image=quay.io/ceph/grafana:10.4.0, name=goofy_mestorf, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:36:20 compute-0 podman[25157]: 2025-10-09 09:36:20.717798791 +0000 UTC m=+0.076188022 container died 94df343c93163c25921f75892a7eb6f03a6ea237158ad229ac180ca8561b6480 (image=quay.io/ceph/grafana:10.4.0, name=goofy_mestorf, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:36:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc00c91757fc94a3f86eff5c57ffa434282a47ca56bce2346a184d2998b879d2-merged.mount: Deactivated successfully.
Oct  9 09:36:20 compute-0 podman[25157]: 2025-10-09 09:36:20.738830936 +0000 UTC m=+0.097220168 container remove 94df343c93163c25921f75892a7eb6f03a6ea237158ad229ac180ca8561b6480 (image=quay.io/ceph/grafana:10.4.0, name=goofy_mestorf, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:36:20 compute-0 podman[25157]: 2025-10-09 09:36:20.657962311 +0000 UTC m=+0.016351571 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct  9 09:36:20 compute-0 systemd[1]: libpod-conmon-94df343c93163c25921f75892a7eb6f03a6ea237158ad229ac180ca8561b6480.scope: Deactivated successfully.
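The two short-lived grafana containers above (create, start, print `472 0`, die, remove within milliseconds) look like cephadm probing the image for the uid/gid it should chown the daemon's config files to before deploying the real container. A hypothetical reproduction of such a probe follows; the entrypoint override and the stat target are assumptions, since the log only shows the probe's output:

```python
import subprocess

# Run the grafana image once, overriding the entrypoint to print the
# numeric uid/gid of the grafana data directory, then auto-remove the
# container (--rm). The stat target and entrypoint are assumptions;
# the log above only shows the output "472 0".
out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat",
     "quay.io/ceph/grafana:10.4.0", "-c", "%u %g", "/var/lib/grafana"],
    capture_output=True, text=True, check=True,
).stdout
uid, gid = out.split()
print(uid, gid)  # expected "472 0", matching the container output above
```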
Oct  9 09:36:20 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v31: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 191 KiB/s rd, 4.7 KiB/s wr, 350 op/s
Oct  9 09:36:20 compute-0 systemd[1]: Reloading.
Oct  9 09:36:20 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:36:20 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:36:20 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:20 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:20 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event 3b5d1ee0-908c-4495-8f2c-eb222cd9f922 (Global Recovery Event) in 10 seconds
Oct  9 09:36:20 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:36:20 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:36:20 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:36:20 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:36:20 compute-0 systemd[1]: Reloading.
Oct  9 09:36:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:21 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:21 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:21 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:36:21 compute-0 podman[25302]: 2025-10-09 09:36:21.336467598 +0000 UTC m=+0.030618301 container create 80f41780a224394d2e72978ad05b417bbf3d1eeac5620f866d5082d3b8450db5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:36:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256bc183ed1e9ebb8565c258ec613ce5f7bf4760464ea9bf1cfca84b22ee1758/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256bc183ed1e9ebb8565c258ec613ce5f7bf4760464ea9bf1cfca84b22ee1758/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256bc183ed1e9ebb8565c258ec613ce5f7bf4760464ea9bf1cfca84b22ee1758/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256bc183ed1e9ebb8565c258ec613ce5f7bf4760464ea9bf1cfca84b22ee1758/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256bc183ed1e9ebb8565c258ec613ce5f7bf4760464ea9bf1cfca84b22ee1758/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:21 compute-0 podman[25302]: 2025-10-09 09:36:21.376238005 +0000 UTC m=+0.070388718 container init 80f41780a224394d2e72978ad05b417bbf3d1eeac5620f866d5082d3b8450db5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:36:21 compute-0 podman[25302]: 2025-10-09 09:36:21.380549434 +0000 UTC m=+0.074700137 container start 80f41780a224394d2e72978ad05b417bbf3d1eeac5620f866d5082d3b8450db5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:36:21 compute-0 bash[25302]: 80f41780a224394d2e72978ad05b417bbf3d1eeac5620f866d5082d3b8450db5
Oct  9 09:36:21 compute-0 podman[25302]: 2025-10-09 09:36:21.323282478 +0000 UTC m=+0.017433201 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct  9 09:36:21 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
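Once systemd reports the grafana service started, its health can be checked through the cephadm-generated unit, which follows the `ceph-<fsid>@<daemon>.service` template; the concrete unit name below is inferred from the fsid and daemon name in the "Starting Ceph grafana.compute-0 for 286f8bf0-..." line, not quoted from the log:

```python
import subprocess

# cephadm wraps each daemon in a templated systemd unit named
# ceph-<fsid>@<daemon>.service; fsid and daemon name are taken from
# the unit-start message above.
fsid = "286f8bf0-da72-5823-9a4e-ac4457d9e609"
unit = f"ceph-{fsid}@grafana.compute-0.service"
# check=False: `systemctl status` exits nonzero for inactive units.
subprocess.run(["systemctl", "status", "--no-pager", unit], check=False)
```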
Oct  9 09:36:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:36:21 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:36:21 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Oct  9 09:36:21 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:21 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev 091b0c8b-1ea1-4f90-b862-f4c0f89d406c (Updating grafana deployment (+1 -> 1))
Oct  9 09:36:21 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event 091b0c8b-1ea1-4f90-b862-f4c0f89d406c (Updating grafana deployment (+1 -> 1)) in 6 seconds
Oct  9 09:36:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Oct  9 09:36:21 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:21 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev 484fa2be-f1f4-4539-8ed7-b9c81f8f1a26 (Updating ingress.rgw.default deployment (+4 -> 4))
Oct  9 09:36:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Oct  9 09:36:21 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:21 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.kmcywb on compute-0
Oct  9 09:36:21 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.kmcywb on compute-0
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=settings t=2025-10-09T09:36:21.510958698Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-10-09T09:36:21Z
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=settings t=2025-10-09T09:36:21.51118282Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=settings t=2025-10-09T09:36:21.511194272Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=settings t=2025-10-09T09:36:21.511198279Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=settings t=2025-10-09T09:36:21.511201515Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=settings t=2025-10-09T09:36:21.511204451Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=settings t=2025-10-09T09:36:21.511207386Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=settings t=2025-10-09T09:36:21.511210332Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=settings t=2025-10-09T09:36:21.511214069Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=settings t=2025-10-09T09:36:21.511217325Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=settings t=2025-10-09T09:36:21.51122046Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=settings t=2025-10-09T09:36:21.511223356Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=settings t=2025-10-09T09:36:21.511226282Z level=info msg=Target target=[all]
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=settings t=2025-10-09T09:36:21.511236581Z level=info msg="Path Home" path=/usr/share/grafana
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=settings t=2025-10-09T09:36:21.511239937Z level=info msg="Path Data" path=/var/lib/grafana
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=settings t=2025-10-09T09:36:21.511242794Z level=info msg="Path Logs" path=/var/log/grafana
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=settings t=2025-10-09T09:36:21.511245458Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=settings t=2025-10-09T09:36:21.511248264Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=settings t=2025-10-09T09:36:21.511251139Z level=info msg="App mode production"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=sqlstore t=2025-10-09T09:36:21.511459912Z level=info msg="Connecting to DB" dbtype=sqlite3
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=sqlstore t=2025-10-09T09:36:21.511476584Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
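Grafana's warning above is advisory: the SQLite file is group/world-readable (0644) where Grafana expects 0640. Tightening it is a one-liner; note that `/var/lib/grafana/grafana.db` is the path inside the container, so on the host the file would sit under the daemon's cephadm data directory (a sketch, path assumed):

```python
import os
import stat

# Tighten grafana.db to the mode Grafana expects (-rw-r-----, 0640).
# /var/lib/grafana/grafana.db is the in-container path; the host-side
# location under the cephadm data dir is an assumption.
db_path = "/var/lib/grafana/grafana.db"
os.chmod(db_path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)  # 0o640
print(oct(stat.S_IMODE(os.stat(db_path).st_mode)))  # -> "0o640"
```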
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.511894231Z level=info msg="Starting DB migrations"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.512905889Z level=info msg="Executing migration" id="create migration_log table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.513858154Z level=info msg="Migration successfully executed" id="create migration_log table" duration=951.944µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.514761437Z level=info msg="Executing migration" id="create user table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.515365015Z level=info msg="Migration successfully executed" id="create user table" duration=603.918µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.516048594Z level=info msg="Executing migration" id="add unique index user.login"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.516611075Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=562.169µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.517279354Z level=info msg="Executing migration" id="add unique index user.email"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.517815556Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=536.141µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.518479728Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.518991954Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=510.642µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.519862395Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.520387986Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=525.371µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.52098384Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.522885184Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=1.901094ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.523510854Z level=info msg="Executing migration" id="create user table v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.524079626Z level=info msg="Migration successfully executed" id="create user table v2" duration=568.593µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.524634924Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.525177055Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=542.242µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.525737542Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.526476816Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=738.533µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.527027155Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.527366183Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=334.651µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.527916892Z level=info msg="Executing migration" id="Drop old table user_v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.528383442Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=466.299µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.528941294Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.529803148Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=861.764µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.530457602Z level=info msg="Executing migration" id="Update user table charset"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.530478411Z level=info msg="Migration successfully executed" id="Update user table charset" duration=21.351µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.531157161Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.531996593Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=839.984µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.532643844Z level=info msg="Executing migration" id="Add missing user data"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.532806872Z level=info msg="Migration successfully executed" id="Add missing user data" duration=163.329µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.533514266Z level=info msg="Executing migration" id="Add is_disabled column to user"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.534327329Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=812.722µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.534944963Z level=info msg="Executing migration" id="Add index user.login/user.email"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.535536178Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=591.024µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.536081556Z level=info msg="Executing migration" id="Add is_service_account column to user"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.53693207Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=850.294µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.53754202Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.54347671Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=5.933558ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.544072543Z level=info msg="Executing migration" id="Add uid column to user"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.544938796Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=866.044µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.545584443Z level=info msg="Executing migration" id="Update uid column values for users"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.545737462Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=152.949µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.546369213Z level=info msg="Executing migration" id="Add unique index user_uid"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.546904562Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=535.008µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.547517448Z level=info msg="Executing migration" id="create temp user table v1-7"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.548064149Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=551.249µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.548745593Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.549303365Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=557.421µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.549922593Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.550472449Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=545.158µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.551073934Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.551621115Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=546.881µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.552240844Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.552770562Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=529.428µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.553369802Z level=info msg="Executing migration" id="Update temp_user table charset"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.553387436Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=18.256µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.55402645Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.554573071Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=546.331µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.555164296Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.555727188Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=562.77µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.556450981Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.556977555Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=526.442µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.557583587Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.558096836Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=512.336µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.558692098Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.561450869Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.75821ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.562066781Z level=info msg="Executing migration" id="create temp_user v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.562689084Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=622.162µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.563267304Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.563817912Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=549.316µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.564398006Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.56494078Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=542.603µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.565502299Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.566034401Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=531.843µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.566606621Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.567133014Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=525.22µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.567805341Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.56809624Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=290.718µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.568666073Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.569078892Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=412.559µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.569657583Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.570093414Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=435.881µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.570798655Z level=info msg="Executing migration" id="create star table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.571293277Z level=info msg="Migration successfully executed" id="create star table" duration=494.843µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.571868702Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.572473923Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=604.88µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.573152843Z level=info msg="Executing migration" id="create org table v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.573691579Z level=info msg="Migration successfully executed" id="create org table v1" duration=554.926µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.574253789Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.574832661Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=578.641µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.575397094Z level=info msg="Executing migration" id="create org_user table v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.575897729Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=500.364µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.57646042Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.577029522Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=568.832µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.577603765Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.578236899Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=632.814µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.578793809Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.579374073Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=580.214µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.579922488Z level=info msg="Executing migration" id="Update org table charset"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.579937035Z level=info msg="Migration successfully executed" id="Update org table charset" duration=15.118µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.580577803Z level=info msg="Executing migration" id="Update org_user table charset"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.580592541Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=15.099µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.581203051Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.581327146Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=123.933µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.581943728Z level=info msg="Executing migration" id="create dashboard table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.582562234Z level=info msg="Migration successfully executed" id="create dashboard table" duration=605.852µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.583102053Z level=info msg="Executing migration" id="add index dashboard.account_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.583725718Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=623.756µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.584460664Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.585298383Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=837.288µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.585903214Z level=info msg="Executing migration" id="create dashboard_tag table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.586456056Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=552.532µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.587203675Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.587946125Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=742.17µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.58860083Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.589135027Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=534.056µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.589708518Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.593628289Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=3.9195ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.594234933Z level=info msg="Executing migration" id="create dashboard v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.594932699Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=697.516µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.595531948Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.596084991Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=551.65µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.596714348Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.597313948Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=599.35µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.597903951Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.598208936Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=304.775µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.598755917Z level=info msg="Executing migration" id="drop table dashboard_v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.599602213Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=846.056µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.600318023Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.60036407Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=46.327µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.601025657Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.602303987Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.277469ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.602871136Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.604059598Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.18817ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.604650331Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.605829204Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.178642ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.606422793Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.607001494Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=578.41µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.607609611Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.608825664Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.215803ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.609426717Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.610024994Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=597.988µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.610625738Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.611204109Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=578.189µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.611780155Z level=info msg="Executing migration" id="Update dashboard table charset"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.611796916Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=17.352µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.61241462Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.612429618Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=15.919µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.61309355Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.614510051Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.41629ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.615098661Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.616410905Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.311884ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.617023801Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.618352496Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.328545ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.618941476Z level=info msg="Executing migration" id="Add column uid in dashboard"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.620263309Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.321632ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.620875102Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.621030315Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=155.053µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.621720386Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.622299207Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=578.54µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.62286798Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.623471648Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=603.327µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.624027737Z level=info msg="Executing migration" id="Update dashboard title length"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.624042735Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=16.431µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.624701327Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.625294304Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=591.105µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.625862165Z level=info msg="Executing migration" id="create dashboard_provisioning"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.626381274Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=518.778µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.62697833Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.630538423Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=3.560532ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.631154925Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.63169324Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=553.955µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.632261952Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.63284397Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=581.767µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.633412061Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.633995862Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=583.479µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.634582107Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.634830084Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=247.717µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.635375012Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.63583548Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=459.396µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.636449938Z level=info msg="Executing migration" id="Add check_sum column"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.637793091Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.343062ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.638340554Z level=info msg="Executing migration" id="Add index for dashboard_title"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.638910738Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=569.695µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.639483448Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.639610187Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=126.778µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.640281793Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.640408482Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=126.738µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.641031046Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.641625416Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=593.93µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.642196424Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.643769399Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.572725ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.644389158Z level=info msg="Executing migration" id="create data_source table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.645060584Z level=info msg="Migration successfully executed" id="create data_source table" duration=671.466µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.645688828Z level=info msg="Executing migration" id="add index data_source.account_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.646276496Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=587.498µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.646878481Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.647478944Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=600.202µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.648038178Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.648790376Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=750.724µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.649491178Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.650050312Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=559.034µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.650647548Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.654413178Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=3.76565ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.655030191Z level=info msg="Executing migration" id="create data_source table v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.655700715Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=668.821µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.656263326Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.656880951Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=617.434µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.657431298Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.658041339Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=609.84µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.658716682Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.659157493Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=440.51µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.659759629Z level=info msg="Executing migration" id="Add column with_credentials"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.661341871Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.582112ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.66195654Z level=info msg="Executing migration" id="Add secure json data column"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.663545787Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.589106ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.664176727Z level=info msg="Executing migration" id="Update data_source table charset"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.664192116Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=16.1µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.664842892Z level=info msg="Executing migration" id="Update initial version to 1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.664986724Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=143.771µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.665664652Z level=info msg="Executing migration" id="Add read_only data column"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.667229963Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=1.565182ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.668211404Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.668379511Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=169.119µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.669133503Z level=info msg="Executing migration" id="Update json_data with nulls"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.669295479Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=160.944µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.669950403Z level=info msg="Executing migration" id="Add uid column"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.671567291Z level=info msg="Migration successfully executed" id="Add uid column" duration=1.616678ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.672173515Z level=info msg="Executing migration" id="Update uid value"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.672326443Z level=info msg="Migration successfully executed" id="Update uid value" duration=152.937µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.672978102Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.673607438Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=629.057µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.67420264Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.674777144Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=574.042µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.675363139Z level=info msg="Executing migration" id="create api_key table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.675963701Z level=info msg="Migration successfully executed" id="create api_key table" duration=600.361µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.676699098Z level=info msg="Executing migration" id="add index api_key.account_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.67733706Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=638.052µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.677965796Z level=info msg="Executing migration" id="add index api_key.key"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.678554265Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=588.118µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.679524995Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.680128423Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=604.81µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.681094094Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.681684627Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=589.531µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.685917848Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.686545363Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=626.753µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.687241945Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.687830114Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=586.846µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.688454351Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.692618493Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=4.163709ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.6932216Z level=info msg="Executing migration" id="create api_key table v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.693795192Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=573.541µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.694407917Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.695000414Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=591.104µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.695607638Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.696193573Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=585.744µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.696774779Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.69736313Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=588.03µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.698070663Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.698381038Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=309.984µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.698936876Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.699388508Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=451.511µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.69995745Z level=info msg="Executing migration" id="Update api_key table charset"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.699975004Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=17.804µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.700641611Z level=info msg="Executing migration" id="Add expires to api_key table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.702358828Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=1.716897ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.702936647Z level=info msg="Executing migration" id="Add service account foreign key"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.704620453Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.683505ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.705248857Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.705371309Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=121.23µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.706001296Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.707729454Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=1.727888ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.708343963Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.710027357Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=1.683303ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.710667575Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.711262015Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=594.21µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.71183212Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.71227741Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=444.84µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.712845861Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.713481921Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=635.719µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.714025606Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.714676844Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=650.486µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.71541245Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.716039242Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=626.572µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.716785449Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.717397233Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=611.503µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.718050335Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.71810657Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=56.516µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.718759822Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.718778177Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=18.945µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.719286995Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.72103916Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=1.752064ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.72170777Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.723487185Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=1.779225ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.724200531Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.724243792Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=44.323µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.724862218Z level=info msg="Executing migration" id="create quota table v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.725403188Z level=info msg="Migration successfully executed" id="create quota table v1" duration=540.86µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.726061069Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.726687469Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=626.089µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.727367732Z level=info msg="Executing migration" id="Update quota table charset"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.727387179Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=20.039µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.727997229Z level=info msg="Executing migration" id="create plugin_setting table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.728600456Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=602.977µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.729288012Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.729903253Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=615.551µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.73058564Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.732465603Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=1.879774ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.733068381Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.73308403Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=15.94µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.733716222Z level=info msg="Executing migration" id="create session table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.734366177Z level=info msg="Migration successfully executed" id="create session table" duration=649.515µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.735074724Z level=info msg="Executing migration" id="Drop old table playlist table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.735163271Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=87.926µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.735770405Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.735836631Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=67.247µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.73640459Z level=info msg="Executing migration" id="create playlist table v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.736936924Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=532.023µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.737641833Z level=info msg="Executing migration" id="create playlist item table v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.738196359Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=554.235µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.738950261Z level=info msg="Executing migration" id="Update playlist table charset"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.738967163Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=17.123µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.739610235Z level=info msg="Executing migration" id="Update playlist_item table charset"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.739630032Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=20.269µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.74022795Z level=info msg="Executing migration" id="Add playlist column created_at"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.742130287Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=1.902007ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.742823434Z level=info msg="Executing migration" id="Add playlist column updated_at"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.744789742Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=1.966097ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.745370867Z level=info msg="Executing migration" id="drop preferences table v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.745445717Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=75.14µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.746022925Z level=info msg="Executing migration" id="drop preferences table v3"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.746084713Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=62.037µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.746689002Z level=info msg="Executing migration" id="create preferences table v3"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.747280968Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=591.636µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.747946873Z level=info msg="Executing migration" id="Update preferences table charset"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.747964167Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=17.725µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.748632416Z level=info msg="Executing migration" id="Add column team_id in preferences"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.750699694Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.066987ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.751250252Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.751364638Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=114.476µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.751956534Z level=info msg="Executing migration" id="Add column week_start in preferences"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.753999125Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.04245ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.754653248Z level=info msg="Executing migration" id="Add column preferences.json_data"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.756635847Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=1.981567ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.757248441Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.757292575Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=43.622µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.758119604Z level=info msg="Executing migration" id="Add preferences index org_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.759027186Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=907.151µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.75992077Z level=info msg="Executing migration" id="Add preferences index user_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.760717762Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=796.933µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.761509906Z level=info msg="Executing migration" id="create alert table v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.762483633Z level=info msg="Migration successfully executed" id="create alert table v1" duration=973.586µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.763480703Z level=info msg="Executing migration" id="add index alert org_id & id "
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.764340834Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=860.974µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.766058593Z level=info msg="Executing migration" id="add index alert state"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.766815471Z level=info msg="Migration successfully executed" id="add index alert state" duration=756.657µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.768757432Z level=info msg="Executing migration" id="add index alert dashboard_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.769582537Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=825.044µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.77053858Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.771369036Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=830.316µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.772414246Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.773951245Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.478287ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.775125308Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.77627691Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.151742ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.777107826Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.784708087Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=7.599559ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.785628592Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.78629066Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=663.001µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.786997582Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.787748929Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=752.579µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.788707897Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.789136536Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=428.379µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.789927637Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.790613971Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=686.133µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.791515531Z level=info msg="Executing migration" id="create alert_notification table v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.792357018Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=840.345µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.793297331Z level=info msg="Executing migration" id="Add column is_default"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.795866486Z level=info msg="Migration successfully executed" id="Add column is_default" duration=2.568994ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.796676121Z level=info msg="Executing migration" id="Add column frequency"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.799334093Z level=info msg="Migration successfully executed" id="Add column frequency" duration=2.657491ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.800088997Z level=info msg="Executing migration" id="Add column send_reminder"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.80279022Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=2.700862ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.803544843Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.805927515Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=2.383545ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.806749064Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.807483518Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=734.213µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.808384157Z level=info msg="Executing migration" id="Update alert table charset"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.808510585Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=127.791µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.809268875Z level=info msg="Executing migration" id="Update alert_notification table charset"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.809391667Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=123.263µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.810245316Z level=info msg="Executing migration" id="create notification_journal table v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.810923023Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=677.177µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.812047635Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.812807467Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=759.562µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.813760885Z level=info msg="Executing migration" id="drop alert_notification_journal"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.8145065Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=745.504µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.815326767Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.816025113Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=698.277µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.816743799Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.817506106Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=761.956µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.818241803Z level=info msg="Executing migration" id="Add for to alert table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.820687425Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=2.445491ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.821516367Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.823950778Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=2.433229ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.824694018Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.824927209Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=233.32µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.825708632Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.826518238Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=809.265µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.82759034Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.828313132Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=722.663µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.82902269Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.831530518Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=2.508149ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.832202646Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.832344985Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=142.68µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.833129173Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.833879358Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=749.814µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.834696688Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.835822832Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.125744ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.836714242Z level=info msg="Executing migration" id="Drop old annotation table v4"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.836811015Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=95.74µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.837647512Z level=info msg="Executing migration" id="create annotation table v5"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.838514507Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=867.786µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.839378035Z level=info msg="Executing migration" id="add index annotation 0 v3"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.840182081Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=803.545µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.841006556Z level=info msg="Executing migration" id="add index annotation 1 v3"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.8416929Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=685.933µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.842402497Z level=info msg="Executing migration" id="add index annotation 2 v3"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.843042074Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=638.164µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.843858213Z level=info msg="Executing migration" id="add index annotation 3 v3"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.844689189Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=830.486µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.84537938Z level=info msg="Executing migration" id="add index annotation 4 v3"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.846081485Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=702.736µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.846815989Z level=info msg="Executing migration" id="Update annotation table charset"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.846834555Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=19.758µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.847630125Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.850638466Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.007901ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.85140438Z level=info msg="Executing migration" id="Drop category_id index"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.852071839Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=667.178µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.852716886Z level=info msg="Executing migration" id="Add column tags to annotation table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.855225486Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=2.507949ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.855836918Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.856356047Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=518.808µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.85696259Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.85762489Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=661.999µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.858333966Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.858964986Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=629.788µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.859589554Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.866523487Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=6.933653ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.867129531Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.867692892Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=563.262µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.868445311Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.869055462Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=604.311µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.86985036Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.870176365Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=325.885µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.870799941Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.871247804Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=447.794µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.871873174Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.872002197Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=128.862µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.872628829Z level=info msg="Executing migration" id="Add created time to annotation table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.875178035Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=2.549026ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.875836828Z level=info msg="Executing migration" id="Add updated time to annotation table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.878362399Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=2.525381ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.878976137Z level=info msg="Executing migration" id="Add index for created in annotation table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.879645679Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=669.251µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.880239238Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.880852865Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=613.687µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.881517848Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.8816989Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=177.525µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.882309922Z level=info msg="Executing migration" id="Add epoch_end column"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.884859349Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=2.549086ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.885458318Z level=info msg="Executing migration" id="Add index for epoch_end"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.886076113Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=618.646µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.886775111Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.886903694Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=128.632µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.887531337Z level=info msg="Executing migration" id="Move region to single row"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.887787069Z level=info msg="Migration successfully executed" id="Move region to single row" duration=255.632µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.888394986Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.889027888Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=632.632µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.889695427Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.890347327Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=651.889µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.890972205Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.891647888Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=675.153µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.892284709Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.89292667Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=641.57µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.893491104Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.894086036Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=593.61µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.894685185Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.895303481Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=618.085µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.895852807Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.895899565Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=47.08µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.896769536Z level=info msg="Executing migration" id="create test_data table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.89740314Z level=info msg="Migration successfully executed" id="create test_data table" duration=633.834µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.898174665Z level=info msg="Executing migration" id="create dashboard_version table v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.898758746Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=583.931µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.899475528Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.900077453Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=602.517µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.90077614Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.901471131Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=694.661µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.902161923Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.902296196Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=134.513µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.902913731Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.90321557Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=301.629µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.903748935Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.903792918Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=44.404µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.904405052Z level=info msg="Executing migration" id="create team table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.904945692Z level=info msg="Migration successfully executed" id="create team table" duration=540.72µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.905641824Z level=info msg="Executing migration" id="add index team.org_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.906359377Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=717.071µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.907056853Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.907715794Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=658.63µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.908380858Z level=info msg="Executing migration" id="Add column uid in team"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.911096529Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=2.715652ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.911725284Z level=info msg="Executing migration" id="Update uid column values in team"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.91185523Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=129.935µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.912473325Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.913124293Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=650.697µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.913845292Z level=info msg="Executing migration" id="create team member table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.914413753Z level=info msg="Migration successfully executed" id="create team member table" duration=568.402µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.915247195Z level=info msg="Executing migration" id="add index team_member.org_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.915872333Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=624.667µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.916713479Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.917362844Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=649.084µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.918184152Z level=info msg="Executing migration" id="add index team_member.team_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.918806355Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=622.003µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.919755024Z level=info msg="Executing migration" id="Add column email to team table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.922739111Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=2.981672ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.923417019Z level=info msg="Executing migration" id="Add column external to team_member table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.926513217Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.095766ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.9271724Z level=info msg="Executing migration" id="Add column permission to team_member table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.929982468Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=2.809487ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.93068912Z level=info msg="Executing migration" id="create dashboard acl table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.93139433Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=704.409µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.93212643Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.932859913Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=733.284µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.933692614Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.934461814Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=768.72µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.935379975Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.936063854Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=683.469µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.936680457Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.937325363Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=645.336µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.937900768Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.938589908Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=689.792µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.939173558Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.93983808Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=664.272µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.940459372Z level=info msg="Executing migration" id="add index dashboard_permission"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.941123594Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=663.861µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.941709679Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.942106687Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=396.878µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.942723911Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.942885647Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=161.385µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.943521887Z level=info msg="Executing migration" id="create tag table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.944065511Z level=info msg="Migration successfully executed" id="create tag table" duration=543.264µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.944735504Z level=info msg="Executing migration" id="add index tag.key_value"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.945395368Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=659.533µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.946034354Z level=info msg="Executing migration" id="create login attempt table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.946595021Z level=info msg="Migration successfully executed" id="create login attempt table" duration=560.497µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.947300852Z level=info msg="Executing migration" id="add index login_attempt.username"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.947943223Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=642.312µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.948536832Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.949182279Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=645.366µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.949768625Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.958877168Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=9.107822ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.959536602Z level=info msg="Executing migration" id="create login_attempt v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.960070529Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=533.866µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.960673195Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.961324683Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=652.41µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.961918292Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.96217708Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=258.678µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.962722017Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.96320604Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=483.812µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.963772137Z level=info msg="Executing migration" id="create user auth table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.964312957Z level=info msg="Migration successfully executed" id="create user auth table" duration=540.579µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.964905985Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.965584976Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=678.539µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.966165029Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.966212008Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=46.179µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.966863216Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.970193906Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=3.330461ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.970829514Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.973926504Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.096879ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.974536824Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.977628073Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.091228ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.978224929Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.981385767Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=3.160578ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.981953358Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.982639711Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=686.174µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.983252497Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.986416614Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=3.163976ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.987047653Z level=info msg="Executing migration" id="create server_lock table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.987672301Z level=info msg="Migration successfully executed" id="create server_lock table" duration=624.449µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.9883293Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.988985676Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=656.117µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.989590348Z level=info msg="Executing migration" id="create user auth token table"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.990209544Z level=info msg="Migration successfully executed" id="create user auth token table" duration=619.999µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.990789478Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.991466605Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=676.706µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.992012985Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.992691103Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=678.149µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.993296444Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.994023837Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=727.202µs
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.994665567Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.998045069Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=3.379332ms
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.998664246Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Oct  9 09:36:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.999345461Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=681.054µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:21.999986499Z level=info msg="Executing migration" id="create cache_data table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.000653988Z level=info msg="Migration successfully executed" id="create cache_data table" duration=667.448µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.001269307Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.001934552Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=665.816µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.002591501Z level=info msg="Executing migration" id="create short_url table v1"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.003278736Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=687.135µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.003931949Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.004647166Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=714.788µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.005249052Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.00529604Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=46.327µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.00592703Z level=info msg="Executing migration" id="delete alert_definition table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.005992032Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=64.2µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.006636157Z level=info msg="Executing migration" id="recreate alert_definition table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.00727923Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=643.404µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.007839836Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.00857393Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=733.724µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.009130921Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.009863723Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=733.051µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.010478191Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.010523235Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=45.375µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.011160287Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.011817256Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=657.039µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.012425052Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.013084785Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=659.483µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.013677123Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.01438117Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=703.867µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.01494363Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.015674459Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=730.458µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.016265873Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.019851023Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=3.584729ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.020442939Z level=info msg="Executing migration" id="drop alert_definition table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.021192853Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=748.883µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.021772365Z level=info msg="Executing migration" id="delete alert_definition_version table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.021839272Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=67.096µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.022531497Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.023180371Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=648.934µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.023793436Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.024522141Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=728.494µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.025076997Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.025797385Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=719.025µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.026391825Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.026450186Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=58.501µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.027073581Z level=info msg="Executing migration" id="drop alert_definition_version table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.027836128Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=762.358µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.028445899Z level=info msg="Executing migration" id="create alert_instance table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.029113568Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=667.339µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.029700134Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.030463133Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=762.848µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.031013459Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.03173515Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=722.633µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.032303472Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.035948465Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=3.644973ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.036558044Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.037230201Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=672.046µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.037811537Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.038524772Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=713.014µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.039071693Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.058268719Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=19.196605ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.059041586Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.076425252Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=17.382314ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.077247323Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.077937554Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=689.811µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.078540382Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.079217317Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=676.826µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.079790478Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.083251153Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=3.460346ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.083862475Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.087301971Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=3.439295ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.087879831Z level=info msg="Executing migration" id="create alert_rule table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.088601571Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=721.63µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.089191573Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.089901483Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=709.718µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.09047872Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.091197055Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=716.611µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.091766147Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.092549544Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=782.605µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.093133025Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.093197547Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=64.852µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.093831742Z level=info msg="Executing migration" id="add column for to alert_rule"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.097558128Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=3.726066ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.098112734Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.101684248Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=3.571233ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.102298607Z level=info msg="Executing migration" id="add column labels to alert_rule"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.10589637Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=3.597573ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.106537209Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.107232119Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=694.66µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.107886933Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.108625094Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=738.171µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.109259561Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.11280683Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=3.546657ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.113466102Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.116998172Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=3.530617ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.117630054Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.118347255Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=716.822µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.11891728Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.12252357Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=3.60627ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.123116288Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.12671351Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=3.596981ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.12735495Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.127402208Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=47.619µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.128048136Z level=info msg="Executing migration" id="create alert_rule_version table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.128885506Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=837.269µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.12950882Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.130265687Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=756.487µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.13126356Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.132083074Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=819.224µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.132719264Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.132764559Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=45.876µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.133407432Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.137250217Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=3.842374ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.137850789Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.141700728Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=3.849428ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.142279419Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.146002529Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=3.723041ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.146648948Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.150462087Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=3.812999ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.151047381Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.154760973Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=3.713783ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.155354743Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.155399727Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=45.176µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.156044563Z level=info msg="Executing migration" id=create_alert_configuration_table
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.156627662Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=582.778µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.15742726Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.161268644Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=3.841293ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.16186589Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.161912577Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=47.75µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.16261388Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.166670649Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.055386ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.167265209Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.167968997Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=703.476µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.168619042Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.172581282Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=3.96171ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.173198267Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.173778591Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=580.564µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.174352853Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.175046672Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=693.537µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.175645841Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.179526087Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=3.879564ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.18011599Z level=info msg="Executing migration" id="create provenance_type table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.180697767Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=581.657µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.181292307Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.181989892Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=697.424µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.182594012Z level=info msg="Executing migration" id="create alert_image table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.183207779Z level=info msg="Migration successfully executed" id="create alert_image table" duration=613.628µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.183782774Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.184498973Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=716.07µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.185046786Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.185091561Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=46.197µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.185736257Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.186418352Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=681.645µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.18701623Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.187745114Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=729.936µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.188371194Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.18863963Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.189250902Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.189609429Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=358.296µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.190200714Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.190906475Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=705.571µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.191482561Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.195526496Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=4.061187ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.196165229Z level=info msg="Executing migration" id="create library_element table v1"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.196958516Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=793.186µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.197634881Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.198391528Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=756.306µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.198982041Z level=info msg="Executing migration" id="create library_element_connection table v1"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.199630062Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=648.403µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.200267234Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.201004895Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=737.24µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.201631045Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.202347467Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=715.971µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.202941116Z level=info msg="Executing migration" id="increase max description length to 2048"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.202960322Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=19.857µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.203618032Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.203663387Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=45.706µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.204275321Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.204497711Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=222.981µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.205107019Z level=info msg="Executing migration" id="create data_keys table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.205832738Z level=info msg="Migration successfully executed" id="create data_keys table" duration=725.449µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.206515946Z level=info msg="Executing migration" id="create secrets table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.207103333Z level=info msg="Migration successfully executed" id="create secrets table" duration=587.307µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.20770646Z level=info msg="Executing migration" id="rename data_keys name column to id"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.230392464Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=22.6842ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.231104688Z level=info msg="Executing migration" id="add name column into data_keys"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.235529982Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=4.425134ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.236115576Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.236249979Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=134.594µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.236867514Z level=info msg="Executing migration" id="rename data_keys name column to label"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.259567234Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=22.6994ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.260383562Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.28323581Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=22.851757ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.28394643Z level=info msg="Executing migration" id="create kv_store table v1"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.284627694Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=681.165µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.285269835Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.2860202Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=749.734µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.28665167Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.286803537Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=151.847µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.287445637Z level=info msg="Executing migration" id="create permission table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.288111272Z level=info msg="Migration successfully executed" id="create permission table" duration=665.275µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.288741751Z level=info msg="Executing migration" id="add unique index permission.role_id"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.289460045Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=718.174µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.290017005Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.290774634Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=757.448µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.291560386Z level=info msg="Executing migration" id="create role table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.292208918Z level=info msg="Migration successfully executed" id="create role table" duration=648.332µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.292858443Z level=info msg="Executing migration" id="add column display_name"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.297623196Z level=info msg="Migration successfully executed" id="add column display_name" duration=4.764524ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.298279144Z level=info msg="Executing migration" id="add column group_name"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.302729955Z level=info msg="Migration successfully executed" id="add column group_name" duration=4.450441ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.303362198Z level=info msg="Executing migration" id="add index role.org_id"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.304097013Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=734.765µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.304701994Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.305491132Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=788.757µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.306057139Z level=info msg="Executing migration" id="add index role_org_id_uid"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.306846567Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=789.248µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.307673697Z level=info msg="Executing migration" id="create team role table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.308378025Z level=info msg="Migration successfully executed" id="create team role table" duration=708.505µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.309015417Z level=info msg="Executing migration" id="add index team_role.org_id"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.309794726Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=778.898µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.310446325Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.311257935Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=802.213µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.311869558Z level=info msg="Executing migration" id="add index team_role.team_id"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.312599014Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=729.145µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.313173887Z level=info msg="Executing migration" id="create user role table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.313788616Z level=info msg="Migration successfully executed" id="create user role table" duration=614.578µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.31437423Z level=info msg="Executing migration" id="add index user_role.org_id"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.315101822Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=727.471µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.315704979Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.316452629Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=747.15µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.317009059Z level=info msg="Executing migration" id="add index user_role.user_id"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.3177418Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=732.321µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.318324208Z level=info msg="Executing migration" id="create builtin role table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.318937445Z level=info msg="Migration successfully executed" id="create builtin role table" duration=613.067µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.319531445Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.320272632Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=740.937µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.320834582Z level=info msg="Executing migration" id="add index builtin_role.name"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.321588933Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=753.841µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.32217549Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.327179124Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=5.003324ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.327795215Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.328544479Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=749.032µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.329115825Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.329882471Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=766.185µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.330445944Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.331175468Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=729.175µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.331742669Z level=info msg="Executing migration" id="add unique index role.uid"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.33248076Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=737.812µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.333031308Z level=info msg="Executing migration" id="create seed assignment table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.333600881Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=569.585µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.334216503Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.334963501Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=746.878µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.335628124Z level=info msg="Executing migration" id="add column hidden to role table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.340631247Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=5.004016ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.341232992Z level=info msg="Executing migration" id="permission kind migration"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.346157457Z level=info msg="Migration successfully executed" id="permission kind migration" duration=4.922261ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.34675331Z level=info msg="Executing migration" id="permission attribute migration"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.351550546Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=4.796714ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.352161838Z level=info msg="Executing migration" id="permission identifier migration"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.356927083Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=4.764825ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.357562901Z level=info msg="Executing migration" id="add permission identifier index"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.358425218Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=862.075µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.359062649Z level=info msg="Executing migration" id="add permission action scope role_id index"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.359883468Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=819.606µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.360471025Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.361234074Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=737.13µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.361838163Z level=info msg="Executing migration" id="create query_history table v1"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.362528865Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=689.46µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.363136971Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.36392095Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=783.699µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.364581234Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.364627953Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=49.624µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.365277628Z level=info msg="Executing migration" id="rbac disabled migrator"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.365303937Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=26.76µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.365944726Z level=info msg="Executing migration" id="teams permissions migration"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.366279587Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=334.62µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.3668913Z level=info msg="Executing migration" id="dashboard permissions"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.367274503Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=385.356µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.367918046Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.368405525Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=487.3µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.369033911Z level=info msg="Executing migration" id="drop managed folder create actions"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.369221064Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=170.651µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.369853776Z level=info msg="Executing migration" id="alerting notification permissions"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.370238622Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=384.595µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.370886994Z level=info msg="Executing migration" id="create query_history_star table v1"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.371480092Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=592.937µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.372041631Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.372806754Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=765.043µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.373524817Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.378398206Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=4.876264ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.379002084Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.379049124Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=47.39µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.379760064Z level=info msg="Executing migration" id="create correlation table v1"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.380535015Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=774.57µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.381153652Z level=info msg="Executing migration" id="add index correlations.uid"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.381896061Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=742.309µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.382481826Z level=info msg="Executing migration" id="add index correlations.source_uid"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.383223955Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=742.039µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.383801173Z level=info msg="Executing migration" id="add correlation config column"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.388952106Z level=info msg="Migration successfully executed" id="add correlation config column" duration=5.150481ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.389596951Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.390380499Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=783.278µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.390956104Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.391732388Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=775.863µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.392339403Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.406792663Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=14.452629ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.407493605Z level=info msg="Executing migration" id="create correlation v2"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.408314884Z level=info msg="Migration successfully executed" id="create correlation v2" duration=820.717µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.408925595Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.409687621Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=761.776µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.410285059Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.411076911Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=791.703µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.41173367Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.412519622Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=785.471µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.413104624Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.413304502Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=199.647µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.413925442Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.414581479Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=655.577µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.415158296Z level=info msg="Executing migration" id="add provisioning column"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.420268411Z level=info msg="Migration successfully executed" id="add provisioning column" duration=5.109453ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.420875266Z level=info msg="Executing migration" id="create entity_events table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.42149806Z level=info msg="Migration successfully executed" id="create entity_events table" duration=621.691µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.42227208Z level=info msg="Executing migration" id="create dashboard public config v1"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.423063681Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=791.513µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.423842831Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.424128168Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.424924882Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.425216752Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
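The two level=warn entries above are the only non-info migrator messages in this run: Grafana found the dashboard_public_config indexes already dropped but not recorded in its migration log, so it skipped those steps instead of failing. A minimal sketch for pulling any such anomalies out of a capture like this one (assumptions: the journal was exported to a plain-text file named "messages", and the migrator line format stays as shown above):

```python
#!/usr/bin/env python3
import re

# Matches any Grafana migrator line and captures its level and message tail.
MIGRATOR = re.compile(r'logger=migrator .*?level=(?P<level>\w+) (?P<rest>.*)')

with open("messages", encoding="utf-8") as fh:  # "messages" is an assumed export path
    for line in fh:
        m = MIGRATOR.search(line)
        if m and m.group("level") != "info":
            # e.g. the two "Skipping migration: Already executed ..." warnings
            print(m.group("level"), m.group("rest").rstrip())
```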
Oct  9 09:36:22 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:22 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:22 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:22 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.425835889Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Oct  9 09:36:22 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:22 compute-0 ceph-mon[4497]: Deploying daemon haproxy.rgw.default.compute-0.kmcywb on compute-0
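Two streams interleave at this point: the grafana container replaying its schema migrations, and ceph-mon audit entries from the active mgr (mgr.compute-0.lwqgfy), which is concurrently deploying an ingress haproxy daemon for the RGW service. A hedged sketch that separates the interleaved sources by their syslog tag so each stream can be read on its own (again assuming a plain-text export named "messages"):

```python
#!/usr/bin/env python3
import re
from collections import defaultdict

# "Mon DD HH:MM:SS host tag[pid]: ..." -- capture the syslog tag field.
TAG = re.compile(r'^\w+ +\d+ [\d:]+ \S+ ([^\[:]+)')

by_tag = defaultdict(list)
with open("messages", encoding="utf-8") as fh:  # assumed export path
    for line in fh:
        m = TAG.match(line)
        if m:
            by_tag[m.group(1)].append(line)

# Per-tag line counts; the raw per-source streams remain in by_tag[tag].
for tag, lines in sorted(by_tag.items()):
    print(f"{len(lines):6d}  {tag}")
```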
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.426496886Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=660.475µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.427118378Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.42829189Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.173162ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.428900948Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.429675909Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=775.152µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.43048717Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.431271428Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=782.806µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.431863985Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.43298545Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.120784ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.433697943Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.434509422Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=811.149µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.435089246Z level=info msg="Executing migration" id="Drop public config table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.435779858Z level=info msg="Migration successfully executed" id="Drop public config table" duration=690.223µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.4364599Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.437282681Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=822.451µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.437863306Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.438626566Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=763.039µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.439201289Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.439979416Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=777.726µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.440618352Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.441382382Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=763.94µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.441953849Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.460068985Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=18.108703ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.460994921Z level=info msg="Executing migration" id="add annotations_enabled column"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.466612352Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=5.615888ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.467315467Z level=info msg="Executing migration" id="add time_selection_enabled column"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.472595203Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=5.278483ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.473253975Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.473421241Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=167.326µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.474080153Z level=info msg="Executing migration" id="add share column"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.479177443Z level=info msg="Migration successfully executed" id="add share column" duration=5.096469ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.479831547Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.479989725Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=157.847µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.480666912Z level=info msg="Executing migration" id="create file table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.481363395Z level=info msg="Migration successfully executed" id="create file table" duration=697.584µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.482296615Z level=info msg="Executing migration" id="file table idx: path natural pk"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.483077867Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=781.333µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.483679442Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.484453803Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=774.33µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.485025851Z level=info msg="Executing migration" id="create file_meta table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.485624189Z level=info msg="Migration successfully executed" id="create file_meta table" duration=596.575µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.486304893Z level=info msg="Executing migration" id="file table idx: path key"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.487085254Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=779.409µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.487705574Z level=info msg="Executing migration" id="set path collation in file table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.487754846Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=49.703µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.4897143Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.489795243Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=85.462µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.49163944Z level=info msg="Executing migration" id="managed permissions migration"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.49219605Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=556.519µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.493016377Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.49324627Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=229.493µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.493997316Z level=info msg="Executing migration" id="RBAC action name migrator"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.495119352Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.121694ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.495895315Z level=info msg="Executing migration" id="Add UID column to playlist"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.501369066Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=5.473509ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.501978765Z level=info msg="Executing migration" id="Update uid column values in playlist"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.502100174Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=121.278µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.50278807Z level=info msg="Executing migration" id="Add index for uid in playlist"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.50365783Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=869.3µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.504275926Z level=info msg="Executing migration" id="update group index for alert rules"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.504555453Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=279.808µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.505172266Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.505330846Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=158.36µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.506009625Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.506364895Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=355.09µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.506964816Z level=info msg="Executing migration" id="add action column to seed_assignment"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.512207Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=5.241864ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.512835566Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.518092226Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=5.257462ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.518860324Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.519881109Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.020414ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.520589565Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.583512565Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=62.920796ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.584356105Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.585498039Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.141642ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.586302496Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.587332006Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.02947ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.587980058Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.605988884Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=18.008094ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.606895323Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.612265337Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=5.369743ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.612908981Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.613126051Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=216.929µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.613779192Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.613910119Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=131.007µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.614607544Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.614762445Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=154.761µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.615372647Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.615542247Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=168.628µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.61611666Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.616285668Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=168.999µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.616931666Z level=info msg="Executing migration" id="create folder table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.617615876Z level=info msg="Migration successfully executed" id="create folder table" duration=686.704µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.618181292Z level=info msg="Executing migration" id="Add index for parent_uid"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.619100286Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=918.712µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.619815664Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.620658283Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=841.988µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.621234109Z level=info msg="Executing migration" id="Update folder title length"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.621250559Z level=info msg="Migration successfully executed" id="Update folder title length" duration=16.872µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.621867843Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.622681768Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=813.595µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.623290485Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.624042263Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=749.383µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.624635131Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.625508347Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=872.815µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.626066129Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.626461044Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=394.714µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.627024216Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.627242006Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=217.549µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.627895207Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.62865986Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=765.844µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.62928091Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.630064068Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=782.977µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.63068109Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.631484535Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=803.204µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.632032859Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.632851894Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=818.714µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.633452256Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.634241714Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=788.486µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.634799837Z level=info msg="Executing migration" id="create anon_device table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.635430385Z level=info msg="Migration successfully executed" id="create anon_device table" duration=630.248µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.63605874Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.636910867Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=851.957µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.63757551Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.638337086Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=761.425µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.638895849Z level=info msg="Executing migration" id="create signing_key table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.639607451Z level=info msg="Migration successfully executed" id="create signing_key table" duration=711.453µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.640249812Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.641009585Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=758.371µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.641588425Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.642337739Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=749.553µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.642899377Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.643097902Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=198.895µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.643704386Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.649106751Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=5.403629ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.649728413Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.650295854Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=567.77µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.650900604Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.651695031Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=794.097µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.652297368Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.653061208Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=763.711µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.65377321Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.654587226Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=813.833µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.655152531Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.656350209Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.197318ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.656924933Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.657721484Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=796.171µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.658339811Z level=info msg="Executing migration" id="create sso_setting table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.659070318Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=730.368µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.659682152Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.660266724Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=584.862µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.660810198Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.661003523Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=193.636µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.661634583Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.66167597Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=41.618µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.662308303Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.667995135Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=5.687163ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.66876642Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.674605289Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=5.837116ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.675209498Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.675494666Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=285.027µs
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=migrator t=2025-10-09T09:36:22.676124994Z level=info msg="migrations completed" performed=547 skipped=0 duration=1.163262146s
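The migrator lines above follow one fixed pattern: every schema step is announced with msg="Executing migration" and an id, timed, and confirmed with msg="Migration successfully executed", ending in a single summary line (performed=547 skipped=0). A minimal sketch of that run-and-time loop, assuming hypothetical (id, SQL) pairs in place of Grafana's registered migrations:

    import sqlite3
    import time

    # Hypothetical stand-ins for Grafana's registered migration steps.
    MIGRATIONS = [
        ("create signing_key table",
         "CREATE TABLE IF NOT EXISTS signing_key (id INTEGER PRIMARY KEY, key_id TEXT)"),
        ("add unique index signing_key.key_id",
         "CREATE UNIQUE INDEX IF NOT EXISTS idx_signing_key_key_id ON signing_key (key_id)"),
    ]

    def run_migrations(conn):
        performed = skipped = 0
        start = time.perf_counter()
        for mig_id, sql in MIGRATIONS:
            print(f'level=info msg="Executing migration" id="{mig_id}"')
            t0 = time.perf_counter()
            conn.execute(sql)
            dur_us = (time.perf_counter() - t0) * 1e6
            print(f'level=info msg="Migration successfully executed" id="{mig_id}" duration={dur_us:.3f}µs')
            performed += 1
        print(f'level=info msg="migrations completed" performed={performed} '
              f'skipped={skipped} duration={time.perf_counter() - start:.9f}s')

    run_migrations(sqlite3.connect(":memory:"))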
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=sqlstore t=2025-10-09T09:36:22.677101786Z level=info msg="Created default organization"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=secrets t=2025-10-09T09:36:22.678012422Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=plugin.store t=2025-10-09T09:36:22.692200484Z level=info msg="Loading plugins..."
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=local.finder t=2025-10-09T09:36:22.750632486Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=plugin.store t=2025-10-09T09:36:22.750652574Z level=info msg="Plugins loaded" count=55 duration=58.453062ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=query_data t=2025-10-09T09:36:22.758951652Z level=info msg="Query Service initialization"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=live.push_http t=2025-10-09T09:36:22.763037105Z level=info msg="Live Push Gateway initialization"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=ngalert.migration t=2025-10-09T09:36:22.764905347Z level=info msg=Starting
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=ngalert.migration t=2025-10-09T09:36:22.765239077Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=ngalert.migration orgID=1 t=2025-10-09T09:36:22.765568698Z level=info msg="Migrating alerts for organisation"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=ngalert.migration orgID=1 t=2025-10-09T09:36:22.766095852Z level=info msg="Alerts found to migrate" alerts=0
Oct  9 09:36:22 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v32: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 168 KiB/s rd, 4.1 KiB/s wr, 307 op/s
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=ngalert.migration t=2025-10-09T09:36:22.76770114Z level=info msg="Completed alerting migration"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=ngalert.state.manager t=2025-10-09T09:36:22.782796661Z level=info msg="Running in alternative execution of Error/NoData mode"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=infra.usagestats.collector t=2025-10-09T09:36:22.784206639Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=provisioning.datasources t=2025-10-09T09:36:22.7850658Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=provisioning.alerting t=2025-10-09T09:36:22.792551444Z level=info msg="starting to provision alerting"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=provisioning.alerting t=2025-10-09T09:36:22.792567264Z level=info msg="finished to provision alerting"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=grafanaStorageLogger t=2025-10-09T09:36:22.792687861Z level=info msg="Storage starting"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=http.server t=2025-10-09T09:36:22.795038844Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=http.server t=2025-10-09T09:36:22.795331115Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
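The two http.server lines record a TLS 1.2 floor and an explicit cipher allow-list before the listener comes up on 192.168.122.100:3000. A sketch of an equivalent server-side context using Python's ssl module; the cipher names below are the OpenSSL spellings of two entries from the logged (IANA-named) list, and a certificate and key would still have to be loaded before serving:

    import ssl

    # Server context with the same floor as the log: TLS >= 1.2.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # OpenSSL names for TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 and
    # TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 from the configured list.
    ctx.set_ciphers("ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256")
    # ctx.load_cert_chain("server.crt", "server.key")  # placeholder paths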
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=ngalert.state.manager t=2025-10-09T09:36:22.795393251Z level=info msg="Warming state cache for startup"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=provisioning.dashboard t=2025-10-09T09:36:22.803635913Z level=info msg="starting to provision dashboards"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=ngalert.multiorg.alertmanager t=2025-10-09T09:36:22.80735752Z level=info msg="Starting MultiOrg Alertmanager"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=ngalert.state.manager t=2025-10-09T09:36:22.819027644Z level=info msg="State cache has been initialized" states=0 duration=23.632469ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=ngalert.scheduler t=2025-10-09T09:36:22.819069443Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=ticker t=2025-10-09T09:36:22.819101213Z level=info msg=starting first_tick=2025-10-09T09:36:30Z
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=sqlstore.transactions t=2025-10-09T09:36:22.856961207Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=grafana.update.checker t=2025-10-09T09:36:22.858159436Z level=info msg="Update check succeeded" duration=50.360494ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=plugins.update.checker t=2025-10-09T09:36:22.85818321Z level=info msg="Update check succeeded" duration=55.950484ms
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=sqlstore.transactions t=2025-10-09T09:36:22.871309791Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=sqlstore.transactions t=2025-10-09T09:36:22.882808812Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=sqlstore.transactions t=2025-10-09T09:36:22.892999095Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
Oct  9 09:36:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=sqlstore.transactions t=2025-10-09T09:36:22.904892951Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
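The repeated sqlstore.transactions lines show several writers colliding on Grafana's SQLite database during startup: each transaction that hits "database is locked" sleeps and retries, with the retry counter logged per attempt. A minimal sketch of that retry-on-lock pattern, with the sleep interval and retry cap as assumptions rather than Grafana's actual values:

    import sqlite3
    import time

    def write_with_retry(conn, sql, params=(), max_retries=5, sleep_s=0.01):
        """Retry a write when SQLite reports 'database is locked'."""
        for retry in range(max_retries + 1):
            try:
                with conn:  # transaction: commits on success, rolls back on error
                    conn.execute(sql, params)
                return
            except sqlite3.OperationalError as err:
                if "database is locked" not in str(err) or retry == max_retries:
                    raise
                print(f'msg="Database locked, sleeping then retrying" retry={retry}')
                time.sleep(sleep_s)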
Oct  9 09:36:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=provisioning.dashboard t=2025-10-09T09:36:23.002792098Z level=info msg="finished to provision dashboards"
Oct  9 09:36:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=grafana-apiserver t=2025-10-09T09:36:23.044521628Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Oct  9 09:36:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=grafana-apiserver t=2025-10-09T09:36:23.045007114Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Oct  9 09:36:24 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v33: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 3.4 KiB/s wr, 253 op/s
Oct  9 09:36:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[24823]: ts=2025-10-09T09:36:25.032Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.002078018s
Oct  9 09:36:25 compute-0 podman[25413]: 2025-10-09 09:36:25.815912787 +0000 UTC m=+4.005958491 container create 51ede9f8e22270191b966dc7f7f3cdd99ebbf2b769c7a62d2c06e5448d947c07 (image=quay.io/ceph/haproxy:2.3, name=frosty_bohr)
Oct  9 09:36:25 compute-0 ceph-mgr[4772]: [progress INFO root] Writing back 11 completed events
Oct  9 09:36:25 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 09:36:25 compute-0 systemd[1]: Started libpod-conmon-51ede9f8e22270191b966dc7f7f3cdd99ebbf2b769c7a62d2c06e5448d947c07.scope.
Oct  9 09:36:25 compute-0 podman[25413]: 2025-10-09 09:36:25.806241543 +0000 UTC m=+3.996287257 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct  9 09:36:25 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:25 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:25 compute-0 podman[25413]: 2025-10-09 09:36:25.862903057 +0000 UTC m=+4.052948761 container init 51ede9f8e22270191b966dc7f7f3cdd99ebbf2b769c7a62d2c06e5448d947c07 (image=quay.io/ceph/haproxy:2.3, name=frosty_bohr)
Oct  9 09:36:25 compute-0 podman[25413]: 2025-10-09 09:36:25.867777668 +0000 UTC m=+4.057823362 container start 51ede9f8e22270191b966dc7f7f3cdd99ebbf2b769c7a62d2c06e5448d947c07 (image=quay.io/ceph/haproxy:2.3, name=frosty_bohr)
Oct  9 09:36:25 compute-0 podman[25413]: 2025-10-09 09:36:25.868804814 +0000 UTC m=+4.058850518 container attach 51ede9f8e22270191b966dc7f7f3cdd99ebbf2b769c7a62d2c06e5448d947c07 (image=quay.io/ceph/haproxy:2.3, name=frosty_bohr)
Oct  9 09:36:25 compute-0 frosty_bohr[25516]: 0 0
Oct  9 09:36:25 compute-0 systemd[1]: libpod-51ede9f8e22270191b966dc7f7f3cdd99ebbf2b769c7a62d2c06e5448d947c07.scope: Deactivated successfully.
Oct  9 09:36:25 compute-0 podman[25413]: 2025-10-09 09:36:25.871431747 +0000 UTC m=+4.061477442 container died 51ede9f8e22270191b966dc7f7f3cdd99ebbf2b769c7a62d2c06e5448d947c07 (image=quay.io/ceph/haproxy:2.3, name=frosty_bohr)
Oct  9 09:36:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f16201bdffbcdf6195faf05af6d4c2488277b6c8b6398d767d01998c9389e3e-merged.mount: Deactivated successfully.
Oct  9 09:36:25 compute-0 podman[25413]: 2025-10-09 09:36:25.889751629 +0000 UTC m=+4.079797323 container remove 51ede9f8e22270191b966dc7f7f3cdd99ebbf2b769c7a62d2c06e5448d947c07 (image=quay.io/ceph/haproxy:2.3, name=frosty_bohr)
Oct  9 09:36:25 compute-0 systemd[1]: libpod-conmon-51ede9f8e22270191b966dc7f7f3cdd99ebbf2b769c7a62d2c06e5448d947c07.scope: Deactivated successfully.
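The frosty_bohr sequence (create, init, start, attach, died, remove within a second, with output "0 0") is a throwaway container cephadm runs before deploying haproxy; that "0 0", like the "65534 65534" printed by the short-lived prometheus containers later in this section, looks like the uid/gid that should own the daemon's data path. A sketch of running such a one-shot probe with podman; the stat invocation is a plausible stand-in, not cephadm's confirmed command:

    import subprocess

    def run_once(image, cmd):
        """One-shot container; --rm yields the create/start/died/remove arc seen above."""
        result = subprocess.run(
            ["podman", "run", "--rm", image, *cmd],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    # Hypothetical probe: which uid/gid owns the haproxy data dir in the image.
    print(run_once("quay.io/ceph/haproxy:2.3", ["stat", "-c", "%u %g", "/var/lib/haproxy"]))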
Oct  9 09:36:25 compute-0 systemd[1]: Reloading.
Oct  9 09:36:25 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:25 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:26 compute-0 systemd[1]: Reloading.
Oct  9 09:36:26 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:26 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:26 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.kmcywb for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:36:26 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:26 compute-0 podman[25652]: 2025-10-09 09:36:26.501911555 +0000 UTC m=+0.028592973 container create 0c3906f36b8c5387e26601a1089154bdda03c8f87fbea5119420184790883682 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-0-kmcywb)
Oct  9 09:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be4f6adbde56b6b775c906dc32d3999dc325d568ef742a0bca68c480c09c026a/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:26 compute-0 podman[25652]: 2025-10-09 09:36:26.540336764 +0000 UTC m=+0.067018193 container init 0c3906f36b8c5387e26601a1089154bdda03c8f87fbea5119420184790883682 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-0-kmcywb)
Oct  9 09:36:26 compute-0 podman[25652]: 2025-10-09 09:36:26.544007455 +0000 UTC m=+0.070688874 container start 0c3906f36b8c5387e26601a1089154bdda03c8f87fbea5119420184790883682 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-0-kmcywb)
Oct  9 09:36:26 compute-0 bash[25652]: 0c3906f36b8c5387e26601a1089154bdda03c8f87fbea5119420184790883682
Oct  9 09:36:26 compute-0 podman[25652]: 2025-10-09 09:36:26.490870417 +0000 UTC m=+0.017551855 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct  9 09:36:26 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.kmcywb for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:36:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-0-kmcywb[25664]: [NOTICE] 281/093626 (2) : New worker #1 (4) forked
Oct  9 09:36:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.002000021s ======
Oct  9 09:36:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:26.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000021s
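The radosgw/beast triples that recur every couple of seconds for the rest of this section are haproxy health probes: an anonymous "HEAD / HTTP/1.0" from each ingress host (192.168.122.100 and 192.168.122.102), answered 200 with millisecond-or-less latency. A sketch of an equivalent probe; the RGW port is an assumption (the log does not show it), and http.client speaks HTTP/1.1 rather than the HTTP/1.0 haproxy sends:

    import http.client

    def rgw_healthy(host="192.168.122.100", port=8080, timeout=2.0):
        """Anonymous HEAD / probe; healthy iff the gateway answers 200."""
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("HEAD", "/")
            return conn.getresponse().status == 200
        except OSError:
            return False
        finally:
            conn.close()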
Oct  9 09:36:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:36:26 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:36:26 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct  9 09:36:26 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:26 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.gkeojf on compute-2
Oct  9 09:36:26 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.gkeojf on compute-2
Oct  9 09:36:26 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v34: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 3.0 KiB/s wr, 225 op/s
Oct  9 09:36:27 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:27 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:27 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:27 compute-0 ceph-mon[4497]: Deploying daemon haproxy.rgw.default.compute-2.gkeojf on compute-2
Oct  9 09:36:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:36:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:28.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:36:28 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v35: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 2.8 KiB/s wr, 211 op/s
Oct  9 09:36:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:29.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:29 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:36:29 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:29 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:36:29 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:29 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct  9 09:36:29 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:29 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Oct  9 09:36:29 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:29 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:36:29 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:36:29 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:36:29 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:36:29 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.tcjodw on compute-2
Oct  9 09:36:29 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.tcjodw on compute-2
Oct  9 09:36:29 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:29 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:29 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:29 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:30.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:30 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v36: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:36:30 compute-0 ceph-mon[4497]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:36:30 compute-0 ceph-mon[4497]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:36:30 compute-0 ceph-mon[4497]: Deploying daemon keepalived.rgw.default.compute-2.tcjodw on compute-2
Oct  9 09:36:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:31.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:32.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:32 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v37: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:36:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:33.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:36:34 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:36:34 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct  9 09:36:34 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:34 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:36:34 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:36:34 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:36:34 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:36:34 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.uozjha on compute-0
Oct  9 09:36:34 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.uozjha on compute-0
Oct  9 09:36:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:34.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:34 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v38: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:36:35 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:35 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:35 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:35 compute-0 ceph-mon[4497]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:36:35 compute-0 ceph-mon[4497]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:36:35 compute-0 ceph-mon[4497]: Deploying daemon keepalived.rgw.default.compute-0.uozjha on compute-0
Oct  9 09:36:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:35.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:36.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:36 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v39: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:36:37 compute-0 podman[25758]: 2025-10-09 09:36:37.255436387 +0000 UTC m=+2.766899781 container create d619ce96fc549551bd77d085faf5324c715f88363c3276998366ffd72ddb93a8 (image=quay.io/ceph/keepalived:2.2.4, name=amazing_shamir, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, release=1793, name=keepalived, version=2.2.4, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container)
Oct  9 09:36:37 compute-0 systemd[1]: Started libpod-conmon-d619ce96fc549551bd77d085faf5324c715f88363c3276998366ffd72ddb93a8.scope.
Oct  9 09:36:37 compute-0 podman[25758]: 2025-10-09 09:36:37.246465233 +0000 UTC m=+2.757928648 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct  9 09:36:37 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:37 compute-0 podman[25758]: 2025-10-09 09:36:37.317925085 +0000 UTC m=+2.829388479 container init d619ce96fc549551bd77d085faf5324c715f88363c3276998366ffd72ddb93a8 (image=quay.io/ceph/keepalived:2.2.4, name=amazing_shamir, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, architecture=x86_64, name=keepalived, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, vcs-type=git, com.redhat.component=keepalived-container, release=1793, build-date=2023-02-22T09:23:20)
Oct  9 09:36:37 compute-0 podman[25758]: 2025-10-09 09:36:37.322770569 +0000 UTC m=+2.834233963 container start d619ce96fc549551bd77d085faf5324c715f88363c3276998366ffd72ddb93a8 (image=quay.io/ceph/keepalived:2.2.4, name=amazing_shamir, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, version=2.2.4, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.openshift.expose-services=, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., io.buildah.version=1.28.2, release=1793, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  9 09:36:37 compute-0 podman[25758]: 2025-10-09 09:36:37.323889249 +0000 UTC m=+2.835352643 container attach d619ce96fc549551bd77d085faf5324c715f88363c3276998366ffd72ddb93a8 (image=quay.io/ceph/keepalived:2.2.4, name=amazing_shamir, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, architecture=x86_64, name=keepalived, release=1793, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=2.2.4)
Oct  9 09:36:37 compute-0 amazing_shamir[25839]: 0 0
Oct  9 09:36:37 compute-0 systemd[1]: libpod-d619ce96fc549551bd77d085faf5324c715f88363c3276998366ffd72ddb93a8.scope: Deactivated successfully.
Oct  9 09:36:37 compute-0 podman[25758]: 2025-10-09 09:36:37.326655293 +0000 UTC m=+2.838118688 container died d619ce96fc549551bd77d085faf5324c715f88363c3276998366ffd72ddb93a8 (image=quay.io/ceph/keepalived:2.2.4, name=amazing_shamir, release=1793, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, vendor=Red Hat, Inc., architecture=x86_64, name=keepalived, version=2.2.4, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived)
Oct  9 09:36:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2825e640f5018f16eeebd509e7e14907dfe2f31ed6c6f0e7264209b91b23eea-merged.mount: Deactivated successfully.
Oct  9 09:36:37 compute-0 podman[25758]: 2025-10-09 09:36:37.344483635 +0000 UTC m=+2.855947028 container remove d619ce96fc549551bd77d085faf5324c715f88363c3276998366ffd72ddb93a8 (image=quay.io/ceph/keepalived:2.2.4, name=amazing_shamir, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.expose-services=, vcs-type=git, release=1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph)
Oct  9 09:36:37 compute-0 systemd[1]: libpod-conmon-d619ce96fc549551bd77d085faf5324c715f88363c3276998366ffd72ddb93a8.scope: Deactivated successfully.
Oct  9 09:36:37 compute-0 systemd[1]: Reloading.
Oct  9 09:36:37 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:37 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:37 compute-0 systemd[1]: Reloading.
Oct  9 09:36:37 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:37 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:37.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:37 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.uozjha for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:36:37 compute-0 podman[25973]: 2025-10-09 09:36:37.938577159 +0000 UTC m=+0.024099445 container create 45254cf9a2cd91037496049d12c8fdc604c0d669b06c7d761c3228749e14c043 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha, name=keepalived, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, description=keepalived for Ceph, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, architecture=x86_64, release=1793, distribution-scope=public)
Oct  9 09:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c487121b2532c47d02d3710ee4d66a03a355b0021ca5fd64cdab0da0ffdf5ea/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:37 compute-0 podman[25973]: 2025-10-09 09:36:37.970977893 +0000 UTC m=+0.056500179 container init 45254cf9a2cd91037496049d12c8fdc604c0d669b06c7d761c3228749e14c043 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, distribution-scope=public, version=2.2.4)
Oct  9 09:36:37 compute-0 podman[25973]: 2025-10-09 09:36:37.97451446 +0000 UTC m=+0.060036747 container start 45254cf9a2cd91037496049d12c8fdc604c0d669b06c7d761c3228749e14c043 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, description=keepalived for Ceph, version=2.2.4, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-type=git, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., name=keepalived, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct  9 09:36:37 compute-0 bash[25973]: 45254cf9a2cd91037496049d12c8fdc604c0d669b06c7d761c3228749e14c043
Oct  9 09:36:37 compute-0 podman[25973]: 2025-10-09 09:36:37.928553381 +0000 UTC m=+0.014075687 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct  9 09:36:37 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.uozjha for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:36:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha[25986]: Thu Oct  9 09:36:37 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct  9 09:36:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha[25986]: Thu Oct  9 09:36:37 2025: Running on Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 (built for Linux 5.14.0)
Oct  9 09:36:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha[25986]: Thu Oct  9 09:36:37 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct  9 09:36:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha[25986]: Thu Oct  9 09:36:37 2025: Configuration file /etc/keepalived/keepalived.conf
Oct  9 09:36:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha[25986]: Thu Oct  9 09:36:37 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct  9 09:36:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha[25986]: Thu Oct  9 09:36:37 2025: Starting VRRP child process, pid=4
Oct  9 09:36:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha[25986]: Thu Oct  9 09:36:37 2025: Startup complete
Oct  9 09:36:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha[25986]: Thu Oct  9 09:36:37 2025: (VI_0) Entering BACKUP STATE (init)
Oct  9 09:36:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha[25986]: Thu Oct  9 09:36:37 2025: VRRP_Script(check_backend) succeeded
Oct  9 09:36:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:36:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:36:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct  9 09:36:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:38 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev 484fa2be-f1f4-4539-8ed7-b9c81f8f1a26 (Updating ingress.rgw.default deployment (+4 -> 4))
Oct  9 09:36:38 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event 484fa2be-f1f4-4539-8ed7-b9c81f8f1a26 (Updating ingress.rgw.default deployment (+4 -> 4)) in 17 seconds
Oct  9 09:36:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Oct  9 09:36:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:38 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev 52ed216b-573e-427f-bb23-406cf74edf4e (Updating prometheus deployment (+1 -> 1))
Oct  9 09:36:38 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:38 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:38 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:38 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:38 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Oct  9 09:36:38 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Oct  9 09:36:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:38.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:38 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v40: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:36:39 compute-0 ceph-mon[4497]: Deploying daemon prometheus.compute-0 on compute-0
Oct  9 09:36:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:39.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:40.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:40 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v41: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:36:40 compute-0 ceph-mgr[4772]: [progress INFO root] Writing back 12 completed events
Oct  9 09:36:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 09:36:40 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha[25986]: Thu Oct  9 09:36:41 2025: (VI_0) Entering MASTER STATE
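Between 09:36:37 and 09:36:41 the keepalived instance walks the standard VRRP arc: it enters BACKUP on startup, its check_backend script succeeds, and after hearing no higher-priority advertisement it promotes itself to MASTER (taking over the 192.168.122.2 VIP from the cephadm ingress lines above). A toy model of that timer-driven promotion, with the master-down interval an assumption:

    import time

    class VrrpInstance:
        """Toy VRRP state machine for the VI_0 transitions in the log."""
        def __init__(self, master_down_interval=3.6):
            self.state = "BACKUP"  # (VI_0) Entering BACKUP STATE (init)
            self.deadline = time.monotonic() + master_down_interval

        def on_advertisement(self, master_down_interval=3.6):
            # A live MASTER advertised: stay BACKUP and re-arm the timer.
            self.deadline = time.monotonic() + master_down_interval

        def tick(self):
            # No advertisement before the deadline: promote, as VI_0 does above.
            if self.state == "BACKUP" and time.monotonic() >= self.deadline:
                self.state = "MASTER"
                print("(VI_0) Entering MASTER STATE")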
Oct  9 09:36:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:41.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:41 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:42 compute-0 podman[26078]: 2025-10-09 09:36:42.068802219 +0000 UTC m=+3.472537656 volume create a1d8139eec98a427002a21336808e8410d95daea02178a55096c5d4a0f45e6f5
Oct  9 09:36:42 compute-0 podman[26078]: 2025-10-09 09:36:42.073016635 +0000 UTC m=+3.476752082 container create ea1b7eeb667ce07bc2f74bbb41c9139eb217ec01265dde2b06b628a1fe46f333 (image=quay.io/prometheus/prometheus:v2.51.0, name=affectionate_chaplygin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:42 compute-0 systemd[1]: Started libpod-conmon-ea1b7eeb667ce07bc2f74bbb41c9139eb217ec01265dde2b06b628a1fe46f333.scope.
Oct  9 09:36:42 compute-0 podman[26078]: 2025-10-09 09:36:42.059656508 +0000 UTC m=+3.463391944 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Oct  9 09:36:42 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37eb82b161ce4f0c41d295398674cb5d275003d524f13910fa475b45d00094c9/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:42 compute-0 podman[26078]: 2025-10-09 09:36:42.127570073 +0000 UTC m=+3.531305510 container init ea1b7eeb667ce07bc2f74bbb41c9139eb217ec01265dde2b06b628a1fe46f333 (image=quay.io/prometheus/prometheus:v2.51.0, name=affectionate_chaplygin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:42 compute-0 podman[26078]: 2025-10-09 09:36:42.131723753 +0000 UTC m=+3.535459191 container start ea1b7eeb667ce07bc2f74bbb41c9139eb217ec01265dde2b06b628a1fe46f333 (image=quay.io/prometheus/prometheus:v2.51.0, name=affectionate_chaplygin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:42 compute-0 podman[26078]: 2025-10-09 09:36:42.132871388 +0000 UTC m=+3.536606844 container attach ea1b7eeb667ce07bc2f74bbb41c9139eb217ec01265dde2b06b628a1fe46f333 (image=quay.io/prometheus/prometheus:v2.51.0, name=affectionate_chaplygin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:42 compute-0 affectionate_chaplygin[26299]: 65534 65534
Oct  9 09:36:42 compute-0 systemd[1]: libpod-ea1b7eeb667ce07bc2f74bbb41c9139eb217ec01265dde2b06b628a1fe46f333.scope: Deactivated successfully.
Oct  9 09:36:42 compute-0 conmon[26299]: conmon ea1b7eeb667ce07bc2f7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ea1b7eeb667ce07bc2f74bbb41c9139eb217ec01265dde2b06b628a1fe46f333.scope/container/memory.events
Oct  9 09:36:42 compute-0 podman[26078]: 2025-10-09 09:36:42.134609935 +0000 UTC m=+3.538345371 container died ea1b7eeb667ce07bc2f74bbb41c9139eb217ec01265dde2b06b628a1fe46f333 (image=quay.io/prometheus/prometheus:v2.51.0, name=affectionate_chaplygin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-37eb82b161ce4f0c41d295398674cb5d275003d524f13910fa475b45d00094c9-merged.mount: Deactivated successfully.
Oct  9 09:36:42 compute-0 podman[26078]: 2025-10-09 09:36:42.149524152 +0000 UTC m=+3.553259589 container remove ea1b7eeb667ce07bc2f74bbb41c9139eb217ec01265dde2b06b628a1fe46f333 (image=quay.io/prometheus/prometheus:v2.51.0, name=affectionate_chaplygin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:42 compute-0 podman[26078]: 2025-10-09 09:36:42.150970589 +0000 UTC m=+3.554706036 volume remove a1d8139eec98a427002a21336808e8410d95daea02178a55096c5d4a0f45e6f5
Oct  9 09:36:42 compute-0 systemd[1]: libpod-conmon-ea1b7eeb667ce07bc2f74bbb41c9139eb217ec01265dde2b06b628a1fe46f333.scope: Deactivated successfully.
Oct  9 09:36:42 compute-0 podman[26313]: 2025-10-09 09:36:42.198884129 +0000 UTC m=+0.027652935 volume create d6de939f9b752728890344e45b96802ae01f4a1eadf8d011a6e216d428ec7b15
Oct  9 09:36:42 compute-0 podman[26313]: 2025-10-09 09:36:42.202625603 +0000 UTC m=+0.031394408 container create 7139e316c585d3a689555cc5ed992ca10684e93d6bdcd15408d0cf8157e59e20 (image=quay.io/prometheus/prometheus:v2.51.0, name=eloquent_heyrovsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:42 compute-0 systemd[1]: Started libpod-conmon-7139e316c585d3a689555cc5ed992ca10684e93d6bdcd15408d0cf8157e59e20.scope.
Oct  9 09:36:42 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08f38ca83ccb0a6f81817acdce6379969f582e0cc2bbb3467cd909682462e4b2/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:42 compute-0 podman[26313]: 2025-10-09 09:36:42.251845134 +0000 UTC m=+0.080613950 container init 7139e316c585d3a689555cc5ed992ca10684e93d6bdcd15408d0cf8157e59e20 (image=quay.io/prometheus/prometheus:v2.51.0, name=eloquent_heyrovsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:42 compute-0 podman[26313]: 2025-10-09 09:36:42.256105186 +0000 UTC m=+0.084873992 container start 7139e316c585d3a689555cc5ed992ca10684e93d6bdcd15408d0cf8157e59e20 (image=quay.io/prometheus/prometheus:v2.51.0, name=eloquent_heyrovsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:42 compute-0 eloquent_heyrovsky[26327]: 65534 65534
Oct  9 09:36:42 compute-0 systemd[1]: libpod-7139e316c585d3a689555cc5ed992ca10684e93d6bdcd15408d0cf8157e59e20.scope: Deactivated successfully.
Oct  9 09:36:42 compute-0 podman[26313]: 2025-10-09 09:36:42.258063599 +0000 UTC m=+0.086832404 container attach 7139e316c585d3a689555cc5ed992ca10684e93d6bdcd15408d0cf8157e59e20 (image=quay.io/prometheus/prometheus:v2.51.0, name=eloquent_heyrovsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:42 compute-0 podman[26313]: 2025-10-09 09:36:42.258241493 +0000 UTC m=+0.087010309 container died 7139e316c585d3a689555cc5ed992ca10684e93d6bdcd15408d0cf8157e59e20 (image=quay.io/prometheus/prometheus:v2.51.0, name=eloquent_heyrovsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-08f38ca83ccb0a6f81817acdce6379969f582e0cc2bbb3467cd909682462e4b2-merged.mount: Deactivated successfully.
Oct  9 09:36:42 compute-0 podman[26313]: 2025-10-09 09:36:42.276844515 +0000 UTC m=+0.105613321 container remove 7139e316c585d3a689555cc5ed992ca10684e93d6bdcd15408d0cf8157e59e20 (image=quay.io/prometheus/prometheus:v2.51.0, name=eloquent_heyrovsky, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:42 compute-0 podman[26313]: 2025-10-09 09:36:42.278282836 +0000 UTC m=+0.107051652 volume remove d6de939f9b752728890344e45b96802ae01f4a1eadf8d011a6e216d428ec7b15
Oct  9 09:36:42 compute-0 podman[26313]: 2025-10-09 09:36:42.188534407 +0000 UTC m=+0.017303233 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Oct  9 09:36:42 compute-0 systemd[1]: libpod-conmon-7139e316c585d3a689555cc5ed992ca10684e93d6bdcd15408d0cf8157e59e20.scope: Deactivated successfully.
Oct  9 09:36:42 compute-0 systemd[1]: Reloading.
Oct  9 09:36:42 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:42 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:42 compute-0 systemd[1]: Reloading.
Oct  9 09:36:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:42.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:42 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:42 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:42 compute-0 systemd[1]: Starting Ceph prometheus.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:36:42 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v42: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:36:42 compute-0 podman[26458]: 2025-10-09 09:36:42.882529712 +0000 UTC m=+0.029820159 container create ad7aeb5739d77e7c0db5bedadf9f04170fb86eb3e4620e2c374ce0ab10bde8f2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4889c287c32a1412e12960996b9e37960c1d865f3f2241c6f7fd497267c6b7f/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4889c287c32a1412e12960996b9e37960c1d865f3f2241c6f7fd497267c6b7f/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:42 compute-0 podman[26458]: 2025-10-09 09:36:42.925000561 +0000 UTC m=+0.072291028 container init ad7aeb5739d77e7c0db5bedadf9f04170fb86eb3e4620e2c374ce0ab10bde8f2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:42 compute-0 podman[26458]: 2025-10-09 09:36:42.928539303 +0000 UTC m=+0.075829749 container start ad7aeb5739d77e7c0db5bedadf9f04170fb86eb3e4620e2c374ce0ab10bde8f2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:42 compute-0 bash[26458]: ad7aeb5739d77e7c0db5bedadf9f04170fb86eb3e4620e2c374ce0ab10bde8f2
Oct  9 09:36:42 compute-0 podman[26458]: 2025-10-09 09:36:42.869121364 +0000 UTC m=+0.016411831 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Oct  9 09:36:42 compute-0 systemd[1]: Started Ceph prometheus.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:36:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0[26470]: ts=2025-10-09T09:36:42.953Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Oct  9 09:36:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0[26470]: ts=2025-10-09T09:36:42.953Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Oct  9 09:36:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0[26470]: ts=2025-10-09T09:36:42.953Z caller=main.go:623 level=info host_details="(Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 x86_64 compute-0 (none))"
Oct  9 09:36:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0[26470]: ts=2025-10-09T09:36:42.953Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Oct  9 09:36:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0[26470]: ts=2025-10-09T09:36:42.953Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Oct  9 09:36:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0[26470]: ts=2025-10-09T09:36:42.955Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Oct  9 09:36:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0[26470]: ts=2025-10-09T09:36:42.956Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Oct  9 09:36:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0[26470]: ts=2025-10-09T09:36:42.959Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Oct  9 09:36:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0[26470]: ts=2025-10-09T09:36:42.959Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.753µs
Oct  9 09:36:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0[26470]: ts=2025-10-09T09:36:42.959Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Oct  9 09:36:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0[26470]: ts=2025-10-09T09:36:42.960Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Oct  9 09:36:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0[26470]: ts=2025-10-09T09:36:42.960Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Oct  9 09:36:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0[26470]: ts=2025-10-09T09:36:42.960Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Oct  9 09:36:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0[26470]: ts=2025-10-09T09:36:42.960Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=25.718µs wal_replay_duration=524.028µs wbl_replay_duration=130ns total_replay_duration=570.636µs
Oct  9 09:36:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0[26470]: ts=2025-10-09T09:36:42.961Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Oct  9 09:36:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0[26470]: ts=2025-10-09T09:36:42.961Z caller=main.go:1153 level=info msg="TSDB started"
Oct  9 09:36:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0[26470]: ts=2025-10-09T09:36:42.962Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Oct  9 09:36:42 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:36:42 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:42 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:36:42 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:42 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Oct  9 09:36:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0[26470]: ts=2025-10-09T09:36:42.983Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=21.280228ms db_storage=882ns remote_storage=1.072µs web_handler=611ns query_engine=582ns scrape=3.568507ms scrape_sd=112.732µs notify=12.735µs notify_sd=9.498µs rules=17.171052ms tracing=4.94µs
Oct  9 09:36:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0[26470]: ts=2025-10-09T09:36:42.983Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Oct  9 09:36:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0[26470]: ts=2025-10-09T09:36:42.983Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Oct  9 09:36:42 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:42 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev 52ed216b-573e-427f-bb23-406cf74edf4e (Updating prometheus deployment (+1 -> 1))
Oct  9 09:36:42 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event 52ed216b-573e-427f-bb23-406cf74edf4e (Updating prometheus deployment (+1 -> 1)) in 5 seconds
Oct  9 09:36:42 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Oct  9 09:36:42 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Oct  9 09:36:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:43.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:43 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:43 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:43 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:43 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Oct  9 09:36:43 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Oct  9 09:36:43 compute-0 ceph-mgr[4772]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  9 09:36:43 compute-0 ceph-mgr[4772]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  9 09:36:43 compute-0 ceph-mgr[4772]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  9 09:36:43 compute-0 ceph-mgr[4772]: mgr respawn  1: '-n'
Oct  9 09:36:43 compute-0 ceph-mgr[4772]: mgr respawn  2: 'mgr.compute-0.lwqgfy'
Oct  9 09:36:43 compute-0 ceph-mgr[4772]: mgr respawn  3: '-f'
Oct  9 09:36:43 compute-0 ceph-mgr[4772]: mgr respawn  4: '--setuser'
Oct  9 09:36:43 compute-0 ceph-mgr[4772]: mgr respawn  5: 'ceph'
Oct  9 09:36:43 compute-0 ceph-mgr[4772]: mgr respawn  6: '--setgroup'
Oct  9 09:36:43 compute-0 ceph-mgr[4772]: mgr respawn  7: 'ceph'
Oct  9 09:36:43 compute-0 ceph-mgr[4772]: mgr respawn  8: '--default-log-to-file=false'
Oct  9 09:36:43 compute-0 ceph-mgr[4772]: mgr respawn  9: '--default-log-to-journald=true'
Oct  9 09:36:43 compute-0 ceph-mgr[4772]: mgr respawn  10: '--default-log-to-stderr=false'
Oct  9 09:36:43 compute-0 ceph-mgr[4772]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct  9 09:36:43 compute-0 ceph-mgr[4772]: mgr respawn  exe_path /proc/self/exe
Oct  9 09:36:44 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.lwqgfy(active, since 53s), standbys: compute-2.takdnm, compute-1.etokpp
Oct  9 09:36:44 compute-0 systemd[1]: session-20.scope: Deactivated successfully.
Oct  9 09:36:44 compute-0 systemd[1]: session-20.scope: Consumed 33.263s CPU time.
Oct  9 09:36:44 compute-0 systemd-logind[798]: Session 20 logged out. Waiting for processes to exit.
Oct  9 09:36:44 compute-0 systemd-logind[798]: Removed session 20.
Oct  9 09:36:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ignoring --setuser ceph since I am not root
Oct  9 09:36:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ignoring --setgroup ceph since I am not root
Oct  9 09:36:44 compute-0 ceph-mgr[4772]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  9 09:36:44 compute-0 ceph-mgr[4772]: pidfile_write: ignore empty --pid-file
Oct  9 09:36:44 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'alerts'
Oct  9 09:36:44 compute-0 ceph-mgr[4772]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:36:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:44.178+0000 7f4a3a14a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:36:44 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'balancer'
Oct  9 09:36:44 compute-0 ceph-mgr[4772]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:36:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:44.249+0000 7f4a3a14a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:36:44 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'cephadm'
Oct  9 09:36:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:44.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:44 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'crash'
Oct  9 09:36:44 compute-0 ceph-mgr[4772]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:36:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:44.918+0000 7f4a3a14a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:36:44 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'dashboard'
Oct  9 09:36:44 compute-0 ceph-mon[4497]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Oct  9 09:36:45 compute-0 python3[26540]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:36:45 compute-0 podman[26541]: 2025-10-09 09:36:45.328077809 +0000 UTC m=+0.027153012 container create 7ae1984bff0059c2b6c646c36717bdbf1fd6046e79f92b27dfad81daef40dd0e (image=quay.io/ceph/ceph:v19, name=trusting_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct  9 09:36:45 compute-0 systemd[1]: Started libpod-conmon-7ae1984bff0059c2b6c646c36717bdbf1fd6046e79f92b27dfad81daef40dd0e.scope.
Oct  9 09:36:45 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0342cae404155f4eeb2034c1b7247703e8153050eefc4aa20382bb3066d310ba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0342cae404155f4eeb2034c1b7247703e8153050eefc4aa20382bb3066d310ba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:45 compute-0 podman[26541]: 2025-10-09 09:36:45.380491132 +0000 UTC m=+0.079566336 container init 7ae1984bff0059c2b6c646c36717bdbf1fd6046e79f92b27dfad81daef40dd0e (image=quay.io/ceph/ceph:v19, name=trusting_black, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:36:45 compute-0 podman[26541]: 2025-10-09 09:36:45.385190833 +0000 UTC m=+0.084266036 container start 7ae1984bff0059c2b6c646c36717bdbf1fd6046e79f92b27dfad81daef40dd0e (image=quay.io/ceph/ceph:v19, name=trusting_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  9 09:36:45 compute-0 podman[26541]: 2025-10-09 09:36:45.388243177 +0000 UTC m=+0.087318400 container attach 7ae1984bff0059c2b6c646c36717bdbf1fd6046e79f92b27dfad81daef40dd0e (image=quay.io/ceph/ceph:v19, name=trusting_black, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  9 09:36:45 compute-0 podman[26541]: 2025-10-09 09:36:45.317481132 +0000 UTC m=+0.016556345 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:36:45 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'devicehealth'
Oct  9 09:36:45 compute-0 trusting_black[26553]: could not fetch user info: no user info saved
Oct  9 09:36:45 compute-0 systemd[1]: libpod-7ae1984bff0059c2b6c646c36717bdbf1fd6046e79f92b27dfad81daef40dd0e.scope: Deactivated successfully.
Oct  9 09:36:45 compute-0 podman[26541]: 2025-10-09 09:36:45.492367589 +0000 UTC m=+0.191442792 container died 7ae1984bff0059c2b6c646c36717bdbf1fd6046e79f92b27dfad81daef40dd0e (image=quay.io/ceph/ceph:v19, name=trusting_black, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  9 09:36:45 compute-0 ceph-mgr[4772]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'diskprediction_local'
Oct  9 09:36:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:45.491+0000 7f4a3a14a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-0342cae404155f4eeb2034c1b7247703e8153050eefc4aa20382bb3066d310ba-merged.mount: Deactivated successfully.
Oct  9 09:36:45 compute-0 podman[26541]: 2025-10-09 09:36:45.50973988 +0000 UTC m=+0.208815084 container remove 7ae1984bff0059c2b6c646c36717bdbf1fd6046e79f92b27dfad81daef40dd0e (image=quay.io/ceph/ceph:v19, name=trusting_black, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:36:45 compute-0 systemd[1]: libpod-conmon-7ae1984bff0059c2b6c646c36717bdbf1fd6046e79f92b27dfad81daef40dd0e.scope: Deactivated successfully.
Oct  9 09:36:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  9 09:36:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  9 09:36:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  from numpy import show_config as show_numpy_config
Oct  9 09:36:45 compute-0 ceph-mgr[4772]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:45.644+0000 7f4a3a14a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'influx'
Oct  9 09:36:45 compute-0 ceph-mgr[4772]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'insights'
Oct  9 09:36:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:45.711+0000 7f4a3a14a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-0 python3[26675]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:36:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:45.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:45 compute-0 podman[26676]: 2025-10-09 09:36:45.771292126 +0000 UTC m=+0.026965197 container create 916792882daf16b279f8c358ae350d4195a26338b72335298de5d2df959e0538 (image=quay.io/ceph/ceph:v19, name=admiring_meninsky, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct  9 09:36:45 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'iostat'
Oct  9 09:36:45 compute-0 systemd[1]: Started libpod-conmon-916792882daf16b279f8c358ae350d4195a26338b72335298de5d2df959e0538.scope.
Oct  9 09:36:45 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dbf0b1b284444fc3ec3708911f4a4004ade40f0c7656629a9bde1f785a91a45/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dbf0b1b284444fc3ec3708911f4a4004ade40f0c7656629a9bde1f785a91a45/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:45 compute-0 podman[26676]: 2025-10-09 09:36:45.82922801 +0000 UTC m=+0.084901082 container init 916792882daf16b279f8c358ae350d4195a26338b72335298de5d2df959e0538 (image=quay.io/ceph/ceph:v19, name=admiring_meninsky, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  9 09:36:45 compute-0 podman[26676]: 2025-10-09 09:36:45.833011633 +0000 UTC m=+0.088684706 container start 916792882daf16b279f8c358ae350d4195a26338b72335298de5d2df959e0538 (image=quay.io/ceph/ceph:v19, name=admiring_meninsky, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  9 09:36:45 compute-0 podman[26676]: 2025-10-09 09:36:45.833974779 +0000 UTC m=+0.089647850 container attach 916792882daf16b279f8c358ae350d4195a26338b72335298de5d2df959e0538 (image=quay.io/ceph/ceph:v19, name=admiring_meninsky, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:36:45 compute-0 ceph-mgr[4772]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'k8sevents'
Oct  9 09:36:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:45.844+0000 7f4a3a14a140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-0 podman[26676]: 2025-10-09 09:36:45.760518635 +0000 UTC m=+0.016191727 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]: {
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "user_id": "openstack",
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "display_name": "openstack",
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "email": "",
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "suspended": 0,
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "max_buckets": 1000,
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "subusers": [],
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "keys": [
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:        {
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:            "user": "openstack",
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:            "access_key": "HTWAVERV8N89XXXED0E3",
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:            "secret_key": "XB4QdOJkCaIVwsEvHtWsRV44UolwLa4hgrMMioNP",
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:            "active": true,
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:            "create_date": "2025-10-09T09:36:45.927554Z"
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:        }
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    ],
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "swift_keys": [],
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "caps": [],
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "op_mask": "read, write, delete",
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "default_placement": "",
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "default_storage_class": "",
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "placement_tags": [],
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "bucket_quota": {
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:        "enabled": false,
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:        "check_on_raw": false,
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:        "max_size": -1,
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:        "max_size_kb": 0,
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:        "max_objects": -1
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    },
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "user_quota": {
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:        "enabled": false,
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:        "check_on_raw": false,
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:        "max_size": -1,
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:        "max_size_kb": 0,
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:        "max_objects": -1
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    },
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "temp_url_keys": [],
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "type": "rgw",
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "mfa_ids": [],
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "account_id": "",
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "path": "/",
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "create_date": "2025-10-09T09:36:45.927259Z",
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "tags": [],
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]:    "group_ids": []
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]: }
Oct  9 09:36:45 compute-0 admiring_meninsky[26688]: 
Oct  9 09:36:45 compute-0 systemd[1]: libpod-916792882daf16b279f8c358ae350d4195a26338b72335298de5d2df959e0538.scope: Deactivated successfully.
Oct  9 09:36:45 compute-0 podman[26676]: 2025-10-09 09:36:45.953904198 +0000 UTC m=+0.209577281 container died 916792882daf16b279f8c358ae350d4195a26338b72335298de5d2df959e0538 (image=quay.io/ceph/ceph:v19, name=admiring_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  9 09:36:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dbf0b1b284444fc3ec3708911f4a4004ade40f0c7656629a9bde1f785a91a45-merged.mount: Deactivated successfully.
Oct  9 09:36:45 compute-0 podman[26676]: 2025-10-09 09:36:45.971284164 +0000 UTC m=+0.226957236 container remove 916792882daf16b279f8c358ae350d4195a26338b72335298de5d2df959e0538 (image=quay.io/ceph/ceph:v19, name=admiring_meninsky, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  9 09:36:45 compute-0 systemd[1]: libpod-conmon-916792882daf16b279f8c358ae350d4195a26338b72335298de5d2df959e0538.scope: Deactivated successfully.
Oct  9 09:36:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:46 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'localpool'
Oct  9 09:36:46 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'mds_autoscaler'
Oct  9 09:36:46 compute-0 python3[26808]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:36:46 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'mirroring'
Oct  9 09:36:46 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'nfs'
Oct  9 09:36:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:46.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:46 compute-0 ceph-mgr[4772]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:36:46 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'orchestrator'
Oct  9 09:36:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:46.745+0000 7f4a3a14a140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:36:46 compute-0 ceph-mgr[4772]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:36:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:46.943+0000 7f4a3a14a140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:36:46 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'osd_perf_query'
Oct  9 09:36:47 compute-0 ceph-mgr[4772]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'osd_support'
Oct  9 09:36:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:47.013+0000 7f4a3a14a140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-0 ceph-mgr[4772]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:47.076+0000 7f4a3a14a140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'pg_autoscaler'
Oct  9 09:36:47 compute-0 ceph-mgr[4772]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'progress'
Oct  9 09:36:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:47.145+0000 7f4a3a14a140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-0 ceph-mgr[4772]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'prometheus'
Oct  9 09:36:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:47.210+0000 7f4a3a14a140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-0 ceph-mgr[4772]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'rbd_support'
Oct  9 09:36:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:47.518+0000 7f4a3a14a140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-0 ceph-mgr[4772]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'restful'
Oct  9 09:36:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:47.605+0000 7f4a3a14a140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:47.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:47 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'rgw'
Oct  9 09:36:47 compute-0 ceph-mgr[4772]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'rook'
Oct  9 09:36:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:47.993+0000 7f4a3a14a140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-0 ceph-mgr[4772]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'selftest'
Oct  9 09:36:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:48.485+0000 7f4a3a14a140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:48.548+0000 7f4a3a14a140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-0 ceph-mgr[4772]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'snap_schedule'
Oct  9 09:36:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:48.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:48.620+0000 7f4a3a14a140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-0 ceph-mgr[4772]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'stats'
Oct  9 09:36:48 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'status'
Oct  9 09:36:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:48.752+0000 7f4a3a14a140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-0 ceph-mgr[4772]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'telegraf'
Oct  9 09:36:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:48.818+0000 7f4a3a14a140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-0 ceph-mgr[4772]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'telemetry'
Oct  9 09:36:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:48.955+0000 7f4a3a14a140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-0 ceph-mgr[4772]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'test_orchestrator'
Oct  9 09:36:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:49.146+0000 7f4a3a14a140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'volumes'
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.etokpp restarted
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.etokpp started
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.takdnm restarted
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.takdnm started
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.lwqgfy(active, since 58s), standbys: compute-2.takdnm, compute-1.etokpp
Oct  9 09:36:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:49.390+0000 7f4a3a14a140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr[py] Loading python module 'zabbix'
Oct  9 09:36:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:49.453+0000 7f4a3a14a140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Active manager daemon compute-0.lwqgfy restarted
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.lwqgfy
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: ms_deliver_dispatch: unhandled message 0x55e08f30b860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.lwqgfy(active, starting, since 0.0142687s), standbys: compute-2.takdnm, compute-1.etokpp
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr handle_mgr_map Activating!
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr handle_mgr_map I am now activating
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.zfggbi"} v 0)
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.zfggbi"}]: dispatch
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e9 all = 0
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.wjwyle"} v 0)
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.wjwyle"}]: dispatch
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e9 all = 0
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.svghvn"} v 0)
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.svghvn"}]: dispatch
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e9 all = 0
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.lwqgfy", "id": "compute-0.lwqgfy"} v 0)
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-0.lwqgfy", "id": "compute-0.lwqgfy"}]: dispatch
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.takdnm", "id": "compute-2.takdnm"} v 0)
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-2.takdnm", "id": "compute-2.takdnm"}]: dispatch
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.etokpp", "id": "compute-1.etokpp"} v 0)
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-1.etokpp", "id": "compute-1.etokpp"}]: dispatch
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).mds e9 all = 1
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: balancer
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Manager daemon compute-0.lwqgfy is now available
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Starting
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:36:49
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: cephadm
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: crash
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: dashboard
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO access_control] Loading user roles DB version=2
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO sso] Loading SSO DB version=1
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: devicehealth
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [devicehealth INFO root] Starting
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: iostat
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: nfs
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: orchestrator
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: pg_autoscaler
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: progress
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [progress INFO root] Loading...
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f49d9c790a0>, <progress.module.GhostEvent object at 0x7f49d9c790d0>, <progress.module.GhostEvent object at 0x7f49d9c79100>, <progress.module.GhostEvent object at 0x7f49d9c79130>, <progress.module.GhostEvent object at 0x7f49d9c79160>, <progress.module.GhostEvent object at 0x7f49d9c79190>, <progress.module.GhostEvent object at 0x7f49d9c791c0>, <progress.module.GhostEvent object at 0x7f49d9c791f0>, <progress.module.GhostEvent object at 0x7f49d9c79220>, <progress.module.GhostEvent object at 0x7f49d9c79250>, <progress.module.GhostEvent object at 0x7f49d9c79280>, <progress.module.GhostEvent object at 0x7f49d9c792b0>] historic events
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: prometheus
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [progress INFO root] Loaded OSDMap, ready.
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [prometheus INFO root] server_addr: :: server_port: 9283
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [prometheus INFO root] Cache enabled
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [prometheus INFO root] starting metric collection thread
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [prometheus INFO root] Starting engine...
Oct  9 09:36:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: [09/Oct/2025:09:36:49] ENGINE Bus STARTING
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.error] [09/Oct/2025:09:36:49] ENGINE Bus STARTING
Oct  9 09:36:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: CherryPy Checker:
Oct  9 09:36:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: The Application mounted at '' has an empty config.
Oct  9 09:36:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] recovery thread starting
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] starting setup
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: rbd_support
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: restful
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"} v 0)
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"}]: dispatch
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: status
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: telemetry
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [restful INFO root] server_addr: :: server_port: 8003
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [restful WARNING root] server not running: no certificate configured
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] PerfHandler: starting
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_task_task: vms, start_after=
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_task_task: volumes, start_after=
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_task_task: backups, start_after=
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_task_task: images, start_after=
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TaskHandler: starting
Oct  9 09:36:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"} v 0)
Oct  9 09:36:49 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"}]: dispatch
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: mgr load Constructed class from module: volumes
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:36:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:49.710+0000 7f49c31e5640 -1 client.0 error registering admin socket command: (17) File exists
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: client.0 error registering admin socket command: (17) File exists
Oct  9 09:36:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:49.712+0000 7f49be79c640 -1 client.0 error registering admin socket command: (17) File exists
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: client.0 error registering admin socket command: (17) File exists
Oct  9 09:36:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:49.712+0000 7f49be79c640 -1 client.0 error registering admin socket command: (17) File exists
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: client.0 error registering admin socket command: (17) File exists
Oct  9 09:36:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:49.712+0000 7f49be79c640 -1 client.0 error registering admin socket command: (17) File exists
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: client.0 error registering admin socket command: (17) File exists
Oct  9 09:36:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:49.712+0000 7f49be79c640 -1 client.0 error registering admin socket command: (17) File exists
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: client.0 error registering admin socket command: (17) File exists
Oct  9 09:36:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:49.712+0000 7f49be79c640 -1 client.0 error registering admin socket command: (17) File exists
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: client.0 error registering admin socket command: (17) File exists
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] setup complete
Oct  9 09:36:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:36:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:49.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:36:49 compute-0 systemd-logind[798]: New session 21 of user ceph-admin.
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Oct  9 09:36:49 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Oct  9 09:36:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: [09/Oct/2025:09:36:49] ENGINE Serving on http://:::9283
Oct  9 09:36:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: [09/Oct/2025:09:36:49] ENGINE Bus STARTED
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.error] [09/Oct/2025:09:36:49] ENGINE Serving on http://:::9283
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.error] [09/Oct/2025:09:36:49] ENGINE Bus STARTED
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [prometheus INFO root] Engine started.
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Oct  9 09:36:49 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Oct  9 09:36:50 compute-0 ceph-mgr[4772]: [dashboard INFO dashboard.module] Engine started.
Oct  9 09:36:50 compute-0 ceph-mon[4497]: Active manager daemon compute-0.lwqgfy restarted
Oct  9 09:36:50 compute-0 ceph-mon[4497]: Activating manager daemon compute-0.lwqgfy
Oct  9 09:36:50 compute-0 ceph-mon[4497]: Manager daemon compute-0.lwqgfy is now available
Oct  9 09:36:50 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:50 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"}]: dispatch
Oct  9 09:36:50 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"}]: dispatch
Oct  9 09:36:50 compute-0 podman[27090]: 2025-10-09 09:36:50.435439509 +0000 UTC m=+0.043085899 container exec fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:36:50 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.lwqgfy(active, since 1.02937s), standbys: compute-2.takdnm, compute-1.etokpp
Oct  9 09:36:50 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v3: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:36:50 compute-0 podman[27090]: 2025-10-09 09:36:50.527109212 +0000 UTC m=+0.134755581 container exec_died fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  9 09:36:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:50.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:50 compute-0 ceph-mgr[4772]: [cephadm INFO cherrypy.error] [09/Oct/2025:09:36:50] ENGINE Bus STARTING
Oct  9 09:36:50 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : [09/Oct/2025:09:36:50] ENGINE Bus STARTING
Oct  9 09:36:50 compute-0 podman[27195]: 2025-10-09 09:36:50.857476872 +0000 UTC m=+0.037370846 container exec f6c5e5aaa66e540d2596b51d05e5f681f364ae1190d47d1f1326559548314a4b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:50 compute-0 ceph-mgr[4772]: [cephadm INFO cherrypy.error] [09/Oct/2025:09:36:50] ENGINE Serving on http://192.168.122.100:8765
Oct  9 09:36:50 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : [09/Oct/2025:09:36:50] ENGINE Serving on http://192.168.122.100:8765
Oct  9 09:36:50 compute-0 podman[27215]: 2025-10-09 09:36:50.918247762 +0000 UTC m=+0.048965775 container exec_died f6c5e5aaa66e540d2596b51d05e5f681f364ae1190d47d1f1326559548314a4b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:50 compute-0 podman[27195]: 2025-10-09 09:36:50.921851044 +0000 UTC m=+0.101745017 container exec_died f6c5e5aaa66e540d2596b51d05e5f681f364ae1190d47d1f1326559548314a4b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:36:50 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:36:50 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:51 compute-0 ceph-mgr[4772]: [cephadm INFO cherrypy.error] [09/Oct/2025:09:36:51] ENGINE Serving on https://192.168.122.100:7150
Oct  9 09:36:51 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : [09/Oct/2025:09:36:51] ENGINE Serving on https://192.168.122.100:7150
Oct  9 09:36:51 compute-0 ceph-mgr[4772]: [cephadm INFO cherrypy.error] [09/Oct/2025:09:36:51] ENGINE Bus STARTED
Oct  9 09:36:51 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : [09/Oct/2025:09:36:51] ENGINE Bus STARTED
Oct  9 09:36:51 compute-0 ceph-mgr[4772]: [cephadm INFO cherrypy.error] [09/Oct/2025:09:36:51] ENGINE Client ('192.168.122.100', 39912) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 09:36:51 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : [09/Oct/2025:09:36:51] ENGINE Client ('192.168.122.100', 39912) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 09:36:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:51 compute-0 podman[27291]: 2025-10-09 09:36:51.179284885 +0000 UTC m=+0.034928459 container exec bd3cbdfb5f1cb9bb74e2043c48786e84aea19baa506d844adecf836d2e2fa6f1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:51 compute-0 podman[27291]: 2025-10-09 09:36:51.199330858 +0000 UTC m=+0.054974422 container exec_died bd3cbdfb5f1cb9bb74e2043c48786e84aea19baa506d844adecf836d2e2fa6f1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:36:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:36:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:51 compute-0 podman[27348]: 2025-10-09 09:36:51.342484921 +0000 UTC m=+0.034408901 container exec 80f41780a224394d2e72978ad05b417bbf3d1eeac5620f866d5082d3b8450db5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:36:51 compute-0 podman[27348]: 2025-10-09 09:36:51.464381287 +0000 UTC m=+0.156305268 container exec_died 80f41780a224394d2e72978ad05b417bbf3d1eeac5620f866d5082d3b8450db5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:36:51 compute-0 ceph-mon[4497]: [09/Oct/2025:09:36:50] ENGINE Bus STARTING
Oct  9 09:36:51 compute-0 ceph-mon[4497]: [09/Oct/2025:09:36:50] ENGINE Serving on http://192.168.122.100:8765
Oct  9 09:36:51 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:51 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:51 compute-0 ceph-mon[4497]: [09/Oct/2025:09:36:51] ENGINE Serving on https://192.168.122.100:7150
Oct  9 09:36:51 compute-0 ceph-mon[4497]: [09/Oct/2025:09:36:51] ENGINE Bus STARTED
Oct  9 09:36:51 compute-0 ceph-mon[4497]: [09/Oct/2025:09:36:51] ENGINE Client ('192.168.122.100', 39912) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 09:36:51 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:51 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:51 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v4: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:36:51 compute-0 ceph-mgr[4772]: [devicehealth INFO root] Check health
Oct  9 09:36:51 compute-0 podman[27417]: 2025-10-09 09:36:51.605478794 +0000 UTC m=+0.035840800 container exec 0c3906f36b8c5387e26601a1089154bdda03c8f87fbea5119420184790883682 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-0-kmcywb)
Oct  9 09:36:51 compute-0 podman[27417]: 2025-10-09 09:36:51.61127423 +0000 UTC m=+0.041636236 container exec_died 0c3906f36b8c5387e26601a1089154bdda03c8f87fbea5119420184790883682 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-0-kmcywb)
Oct  9 09:36:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:36:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:36:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct  9 09:36:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 09:36:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct  9 09:36:51 compute-0 podman[27470]: 2025-10-09 09:36:51.752225131 +0000 UTC m=+0.038185620 container exec 45254cf9a2cd91037496049d12c8fdc604c0d669b06c7d761c3228749e14c043 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.openshift.tags=Ceph keepalived, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, version=2.2.4, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph)
Oct  9 09:36:51 compute-0 podman[27470]: 2025-10-09 09:36:51.757538778 +0000 UTC m=+0.043499268 container exec_died 45254cf9a2cd91037496049d12c8fdc604c0d669b06c7d761c3228749e14c043 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, distribution-scope=public, io.openshift.tags=Ceph keepalived, version=2.2.4, release=1793, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, description=keepalived for Ceph, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct  9 09:36:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:36:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:51.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:36:51 compute-0 podman[27521]: 2025-10-09 09:36:51.89557365 +0000 UTC m=+0.034419470 container exec ad7aeb5739d77e7c0db5bedadf9f04170fb86eb3e4620e2c374ce0ab10bde8f2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:51 compute-0 podman[27521]: 2025-10-09 09:36:51.920371782 +0000 UTC m=+0.059217582 container exec_died ad7aeb5739d77e7c0db5bedadf9f04170fb86eb3e4620e2c374ce0ab10bde8f2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:36:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:36:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:36:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:36:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:52 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.lwqgfy(active, since 2s), standbys: compute-2.takdnm, compute-1.etokpp
Oct  9 09:36:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:36:52] "GET /metrics HTTP/1.1" 200 46560 "" "Prometheus/2.51.0"
Oct  9 09:36:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:36:52] "GET /metrics HTTP/1.1" 200 46560 "" "Prometheus/2.51.0"
Oct  9 09:36:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:52.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:52 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:52 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:52 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 09:36:52 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct  9 09:36:52 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:52 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:52 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:52 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:36:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:36:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct  9 09:36:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:36:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:36:53 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:36:53 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct  9 09:36:53 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:36:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:36:53 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:36:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:36:53 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:36:53 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct  9 09:36:53 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct  9 09:36:53 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Oct  9 09:36:53 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Oct  9 09:36:53 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Oct  9 09:36:53 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Oct  9 09:36:53 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v5: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:36:53 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:36:53 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:36:53 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:36:53 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:36:53 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:36:53 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:36:53 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:53 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:53 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:36:53 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:53 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:53 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:36:53 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:36:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:36:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:53.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:36:53 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:36:53 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:36:53 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:36:53 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:36:53 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:36:53 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:36:54 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.lwqgfy(active, since 4s), standbys: compute-2.takdnm, compute-1.etokpp
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:36:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:54.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:36:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:36:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:36:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:36:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:36:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:36:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:36:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev 3d02fa76-1487-40e9-af3a-ad7e7be94e62 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Oct  9 09:36:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Oct  9 09:36:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: [progress INFO root] fail: finished ev 3d02fa76-1487-40e9-af3a-ad7e7be94e62 (Updating ingress.nfs.cephfs deployment (+6 -> 6)): max() arg is an empty sequence
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event 3d02fa76-1487-40e9-af3a-ad7e7be94e62 (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 0 seconds
Oct  9 09:36:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  9 09:36:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: [cephadm ERROR cephadm.serve] Failed to apply ingress.nfs.cephfs spec IngressSpec.from_json(yaml.safe_load('''service_type: ingress#012service_id: nfs.cephfs#012service_name: ingress.nfs.cephfs#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012spec:#012  backend_service: nfs.cephfs#012  enable_haproxy_protocol: true#012  first_virtual_router_id: 50#012  frontend_port: 2049#012  monitor_port: 9049#012  virtual_ip: 192.168.122.2/24#012''')): max() arg is an empty sequence#012Traceback (most recent call last):#012  File "/usr/share/ceph/mgr/cephadm/serve.py", line 602, in _apply_all_services#012    if self._apply_service(spec):#012  File "/usr/share/ceph/mgr/cephadm/serve.py", line 947, in _apply_service#012    daemon_spec = svc.prepare_create(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 46, in prepare_create#012    return self.haproxy_prepare_create(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 74, in haproxy_prepare_create#012    daemon_spec.final_config, daemon_spec.deps = self.haproxy_generate_config(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 139, in haproxy_generate_config#012    num_ranks = 1 + max(by_rank.keys())#012ValueError: max() arg is an empty sequence
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T09:36:54.693+0000 7f49e8573640 -1 log_channel(cephadm) log [ERR] : Failed to apply ingress.nfs.cephfs spec IngressSpec.from_json(yaml.safe_load('''service_type: ingress
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: service_id: nfs.cephfs
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: service_name: ingress.nfs.cephfs
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: placement:
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  hosts:
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  - compute-0
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  - compute-1
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  - compute-2
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: spec:
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  backend_service: nfs.cephfs
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  enable_haproxy_protocol: true
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  first_virtual_router_id: 50
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  frontend_port: 2049
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  monitor_port: 9049
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  virtual_ip: 192.168.122.2/24
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ''')): max() arg is an empty sequence
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: Traceback (most recent call last):
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  File "/usr/share/ceph/mgr/cephadm/serve.py", line 602, in _apply_all_services
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:    if self._apply_service(spec):
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  File "/usr/share/ceph/mgr/cephadm/serve.py", line 947, in _apply_service
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:    daemon_spec = svc.prepare_create(daemon_spec)
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 46, in prepare_create
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:    return self.haproxy_prepare_create(daemon_spec)
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 74, in haproxy_prepare_create
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:    daemon_spec.final_config, daemon_spec.deps = self.haproxy_generate_config(daemon_spec)
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 139, in haproxy_generate_config
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]:    num_ranks = 1 + max(by_rank.keys())
Oct  9 09:36:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ValueError: max() arg is an empty sequence
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [ERR] : Failed to apply ingress.nfs.cephfs spec IngressSpec.from_json(yaml.safe_load('''service_type: ingress#012service_id: nfs.cephfs#012service_name: ingress.nfs.cephfs#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012spec:#012  backend_service: nfs.cephfs#012  enable_haproxy_protocol: true#012  first_virtual_router_id: 50#012  frontend_port: 2049#012  monitor_port: 9049#012  virtual_ip: 192.168.122.2/24#012''')): max() arg is an empty sequence#012Traceback (most recent call last):#012  File "/usr/share/ceph/mgr/cephadm/serve.py", line 602, in _apply_all_services#012    if self._apply_service(spec):#012  File "/usr/share/ceph/mgr/cephadm/serve.py", line 947, in _apply_service#012    daemon_spec = svc.prepare_create(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 46, in prepare_create#012    return self.haproxy_prepare_create(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 74, in haproxy_prepare_create#012    daemon_spec.final_config, daemon_spec.deps = self.haproxy_generate_config(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 139, in haproxy_generate_config#012    num_ranks = 1 + max(by_rank.keys())#012ValueError: max() arg is an empty sequence
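[Annotation] The traceback logged above (expanded across the container-stream lines, then repeated as single #012-escaped records) originates in /usr/share/ceph/mgr/cephadm/services/ingress.py: haproxy_generate_config() builds by_rank, a dict mapping NFS daemon rank to daemon, for the backend service nfs.cephfs. At this point in the rollout no nfs.cephfs daemon had been assigned a rank yet, so max() over the empty key view raised. A minimal reproduction of the failure mode, plus an illustrative guard (a sketch only, not the upstream patch):

    # by_rank maps NFS daemon rank -> daemon id for the backend service;
    # the nfs.cephfs daemons were still being deployed, so it was empty.
    by_rank = {}

    try:
        num_ranks = 1 + max(by_rank.keys())  # ingress.py line 139 in the traceback
    except ValueError as e:
        print(e)  # "max() arg is an empty sequence"

    # Illustrative guard only: max(..., default=-1) yields num_ranks == 0 when
    # no backend daemons exist yet, so config generation could be deferred
    # instead of raising.
    num_ranks = 1 + max(by_rank.keys(), default=-1)

Consistent with this reading, cephadm simply retries the spec on later serve loops, and the nfs.cephfs daemons are deployed on the lines that follow.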
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v6: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev 5b7bc462-2530-416c-8eaa-4cce25a967df (Updating nfs.cephfs deployment (+3 -> 3))
Oct  9 09:36:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:36:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.douegr
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.douegr
Oct  9 09:36:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Oct  9 09:36:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  9 09:36:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct  9 09:36:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Oct  9 09:36:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  9 09:36:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  9 09:36:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:36:54 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:36:54 compute-0 ceph-mon[4497]: Updating compute-0:/etc/ceph/ceph.conf
Oct  9 09:36:54 compute-0 ceph-mon[4497]: Updating compute-1:/etc/ceph/ceph.conf
Oct  9 09:36:54 compute-0 ceph-mon[4497]: Updating compute-2:/etc/ceph/ceph.conf
Oct  9 09:36:54 compute-0 ceph-mon[4497]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:36:54 compute-0 ceph-mon[4497]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:36:54 compute-0 ceph-mon[4497]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:36:54 compute-0 ceph-mon[4497]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:36:54 compute-0 ceph-mon[4497]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:36:54 compute-0 ceph-mon[4497]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:36:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  9 09:36:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  9 09:36:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  9 09:36:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  9 09:36:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Oct  9 09:36:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  9 09:36:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.douegr-rgw
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.douegr-rgw
Oct  9 09:36:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct  9 09:36:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:36:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.douegr's ganesha conf is defaulting to empty
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.douegr's ganesha conf is defaulting to empty
Oct  9 09:36:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:36:54 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.douegr on compute-1
Oct  9 09:36:54 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.douegr on compute-1
Oct  9 09:36:55 compute-0 ceph-mon[4497]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 1 service(s): ingress.nfs.cephfs (CEPHADM_APPLY_SPEC_FAIL)
Oct  9 09:36:55 compute-0 ceph-mon[4497]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:36:55 compute-0 ceph-mon[4497]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:36:55 compute-0 ceph-mon[4497]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:36:55 compute-0 ceph-mon[4497]: Failed to apply ingress.nfs.cephfs spec IngressSpec.from_json(yaml.safe_load('''service_type: ingress#012service_id: nfs.cephfs#012service_name: ingress.nfs.cephfs#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012spec:#012  backend_service: nfs.cephfs#012  enable_haproxy_protocol: true#012  first_virtual_router_id: 50#012  frontend_port: 2049#012  monitor_port: 9049#012  virtual_ip: 192.168.122.2/24#012''')): max() arg is an empty sequence#012Traceback (most recent call last):#012  File "/usr/share/ceph/mgr/cephadm/serve.py", line 602, in _apply_all_services#012    if self._apply_service(spec):#012  File "/usr/share/ceph/mgr/cephadm/serve.py", line 947, in _apply_service#012    daemon_spec = svc.prepare_create(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 46, in prepare_create#012    return self.haproxy_prepare_create(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 74, in haproxy_prepare_create#012    daemon_spec.final_config, daemon_spec.deps = self.haproxy_generate_config(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 139, in haproxy_generate_config#012    num_ranks = 1 + max(by_rank.keys())#012ValueError: max() arg is an empty sequence
Oct  9 09:36:55 compute-0 ceph-mon[4497]: Creating key for client.nfs.cephfs.0.0.compute-1.douegr
Oct  9 09:36:55 compute-0 ceph-mon[4497]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct  9 09:36:55 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  9 09:36:55 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  9 09:36:55 compute-0 ceph-mon[4497]: Rados config object exists: conf-nfs.cephfs
Oct  9 09:36:55 compute-0 ceph-mon[4497]: Creating key for client.nfs.cephfs.0.0.compute-1.douegr-rgw
Oct  9 09:36:55 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:36:55 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 09:36:55 compute-0 ceph-mon[4497]: Bind address in nfs.cephfs.0.0.compute-1.douegr's ganesha conf is defaulting to empty
Oct  9 09:36:55 compute-0 ceph-mon[4497]: Deploying daemon nfs.cephfs.0.0.compute-1.douegr on compute-1
Oct  9 09:36:55 compute-0 ceph-mon[4497]: Health check failed: Failed to apply 1 service(s): ingress.nfs.cephfs (CEPHADM_APPLY_SPEC_FAIL)
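[Annotation] The CEPHADM_APPLY_SPEC_FAIL health check raised at 09:36:55 persists until the ingress spec applies cleanly. A hedged sketch of inspecting it programmatically through the same mon_command interface the mgr uses in the audit lines above, via the rados Python binding; it assumes the /etc/ceph/ceph.conf and admin keyring that cephadm is distributing in this log are readable locally:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    # Equivalent to `ceph health detail --format json` dispatched to the mons.
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({"prefix": "health", "detail": "detail", "format": "json"}),
        b'')
    health = json.loads(outbuf)
    # CEPHADM_APPLY_SPEC_FAIL is the check raised in the log line above.
    print(health["checks"].get("CEPHADM_APPLY_SPEC_FAIL"))
    cluster.shutdown()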
Oct  9 09:36:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:55.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:36:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:36:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:36:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:55 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.cpioam
Oct  9 09:36:55 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.cpioam
Oct  9 09:36:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Oct  9 09:36:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  9 09:36:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  9 09:36:55 compute-0 ceph-mgr[4772]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct  9 09:36:55 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct  9 09:36:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Oct  9 09:36:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  9 09:36:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  9 09:36:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:36:55 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:36:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:56.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:56 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v7: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 330 B/s wr, 12 op/s
Oct  9 09:36:56 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:56 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:56 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:56 compute-0 ceph-mon[4497]: Creating key for client.nfs.cephfs.1.0.compute-2.cpioam
Oct  9 09:36:56 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  9 09:36:56 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  9 09:36:56 compute-0 ceph-mon[4497]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct  9 09:36:56 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  9 09:36:56 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  9 09:36:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Oct  9 09:36:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Oct  9 09:36:56 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Oct  9 09:36:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:57.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:57 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Oct  9 09:36:58 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Oct  9 09:36:58 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Oct  9 09:36:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:58.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:58 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v10: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 283 B/s wr, 10 op/s
Oct  9 09:36:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Oct  9 09:36:59 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  9 09:36:59 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  9 09:36:59 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Oct  9 09:36:59 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Oct  9 09:36:59 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.cpioam-rgw
Oct  9 09:36:59 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.cpioam-rgw
Oct  9 09:36:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct  9 09:36:59 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:36:59 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 09:36:59 compute-0 ceph-mgr[4772]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.cpioam's ganesha conf is defaulting to empty
Oct  9 09:36:59 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.cpioam's ganesha conf is defaulting to empty
Oct  9 09:36:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:36:59 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:36:59 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.cpioam on compute-2
Oct  9 09:36:59 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.cpioam on compute-2
Oct  9 09:36:59 compute-0 ceph-mgr[4772]: [progress INFO root] Writing back 13 completed events
Oct  9 09:36:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 09:36:59 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:36:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:59.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:00 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  9 09:37:00 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  9 09:37:00 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:37:00 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 09:37:00 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:00 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:37:00 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:00 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:37:00 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:00 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:37:00 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:00 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.rlqbpy
Oct  9 09:37:00 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.rlqbpy
Oct  9 09:37:00 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Oct  9 09:37:00 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  9 09:37:00 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  9 09:37:00 compute-0 ceph-mgr[4772]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct  9 09:37:00 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct  9 09:37:00 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Oct  9 09:37:00 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  9 09:37:00 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  9 09:37:00 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:37:00 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:37:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:00.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:00 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v11: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 13 op/s
Oct  9 09:37:01 compute-0 ceph-mon[4497]: Rados config object exists: conf-nfs.cephfs
Oct  9 09:37:01 compute-0 ceph-mon[4497]: Creating key for client.nfs.cephfs.1.0.compute-2.cpioam-rgw
Oct  9 09:37:01 compute-0 ceph-mon[4497]: Bind address in nfs.cephfs.1.0.compute-2.cpioam's ganesha conf is defaulting to empty
Oct  9 09:37:01 compute-0 ceph-mon[4497]: Deploying daemon nfs.cephfs.1.0.compute-2.cpioam on compute-2
Oct  9 09:37:01 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:01 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:01 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:01 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  9 09:37:01 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  9 09:37:01 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  9 09:37:01 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  9 09:37:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:01.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:02 compute-0 ceph-mon[4497]: Creating key for client.nfs.cephfs.2.0.compute-0.rlqbpy
Oct  9 09:37:02 compute-0 ceph-mon[4497]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct  9 09:37:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:37:02] "GET /metrics HTTP/1.1" 200 46560 "" "Prometheus/2.51.0"
Oct  9 09:37:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:37:02] "GET /metrics HTTP/1.1" 200 46560 "" "Prometheus/2.51.0"
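[Annotation] The two lines above show Prometheus 2.51.0 scraping the active mgr's prometheus module (46560 bytes of metrics). A quick way to inspect the same endpoint by hand; the port is an assumption (9283 is the module's default and is not shown in the log):

    import urllib.request

    # Default mgr prometheus module port is 9283; adjust if overridden.
    url = "http://192.168.122.100:9283/metrics"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode()

    print(len(body), "bytes of metrics")
    # ceph_health_status reflects the HEALTH_WARN raised earlier in this log.
    print([line for line in body.splitlines()
           if line.startswith("ceph_health_status")])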
Oct  9 09:37:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:02.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:02 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v12: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.1 KiB/s wr, 12 op/s
Oct  9 09:37:03 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Oct  9 09:37:03 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  9 09:37:03 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  9 09:37:03 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Oct  9 09:37:03 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Oct  9 09:37:03 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.rlqbpy-rgw
Oct  9 09:37:03 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.rlqbpy-rgw
Oct  9 09:37:03 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Oct  9 09:37:03 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:37:03 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 09:37:03 compute-0 ceph-mgr[4772]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.rlqbpy's ganesha conf is defaulting to empty
Oct  9 09:37:03 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.rlqbpy's ganesha conf is defaulting to empty
Oct  9 09:37:03 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:37:03 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:37:03 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.rlqbpy on compute-0
Oct  9 09:37:03 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.rlqbpy on compute-0
Oct  9 09:37:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:03.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:03 compute-0 podman[28791]: 2025-10-09 09:37:03.988987011 +0000 UTC m=+0.027421227 container create 4aecb15b7527a6c05d5ac5f43894df6fe29ee34219cbb16df0f984e374639e5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_roentgen, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:37:04 compute-0 systemd[1]: Started libpod-conmon-4aecb15b7527a6c05d5ac5f43894df6fe29ee34219cbb16df0f984e374639e5a.scope.
Oct  9 09:37:04 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:04 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  9 09:37:04 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  9 09:37:04 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:37:04 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 09:37:04 compute-0 podman[28791]: 2025-10-09 09:37:04.0488876 +0000 UTC m=+0.087321825 container init 4aecb15b7527a6c05d5ac5f43894df6fe29ee34219cbb16df0f984e374639e5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  9 09:37:04 compute-0 podman[28791]: 2025-10-09 09:37:04.05420236 +0000 UTC m=+0.092636575 container start 4aecb15b7527a6c05d5ac5f43894df6fe29ee34219cbb16df0f984e374639e5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:37:04 compute-0 podman[28791]: 2025-10-09 09:37:04.055402282 +0000 UTC m=+0.093836496 container attach 4aecb15b7527a6c05d5ac5f43894df6fe29ee34219cbb16df0f984e374639e5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_roentgen, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:37:04 compute-0 frosty_roentgen[28806]: 167 167
Oct  9 09:37:04 compute-0 systemd[1]: libpod-4aecb15b7527a6c05d5ac5f43894df6fe29ee34219cbb16df0f984e374639e5a.scope: Deactivated successfully.
Oct  9 09:37:04 compute-0 conmon[28806]: conmon 4aecb15b7527a6c05d5a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4aecb15b7527a6c05d5ac5f43894df6fe29ee34219cbb16df0f984e374639e5a.scope/container/memory.events
Oct  9 09:37:04 compute-0 podman[28791]: 2025-10-09 09:37:04.057860626 +0000 UTC m=+0.096294851 container died 4aecb15b7527a6c05d5ac5f43894df6fe29ee34219cbb16df0f984e374639e5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_roentgen, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 09:37:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd08424a7261b32c5214d373ff5f0bd8ec9d1f780f23a38780e750338d6c3985-merged.mount: Deactivated successfully.
Oct  9 09:37:04 compute-0 podman[28791]: 2025-10-09 09:37:03.977345485 +0000 UTC m=+0.015779720 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:04 compute-0 podman[28791]: 2025-10-09 09:37:04.077110417 +0000 UTC m=+0.115544633 container remove 4aecb15b7527a6c05d5ac5f43894df6fe29ee34219cbb16df0f984e374639e5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:37:04 compute-0 systemd[1]: libpod-conmon-4aecb15b7527a6c05d5ac5f43894df6fe29ee34219cbb16df0f984e374639e5a.scope: Deactivated successfully.
Oct  9 09:37:04 compute-0 systemd[1]: Reloading.
Oct  9 09:37:04 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:37:04 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:37:04 compute-0 systemd[1]: Reloading.
Oct  9 09:37:04 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:37:04 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:37:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Oct  9 09:37:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:37:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:37:04 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.rlqbpy for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:37:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:04.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v13: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 895 B/s wr, 2 op/s
Oct  9 09:37:04 compute-0 podman[28938]: 2025-10-09 09:37:04.721228872 +0000 UTC m=+0.026654733 container create ae795f28e8cc40d40a12c989e9bbeb32107bf485450cd1f6b578cfeea442e1a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct  9 09:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a390089c062eac4a79ed20d731673608a1e61a7af94d0780df1df839318b8ed8/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a390089c062eac4a79ed20d731673608a1e61a7af94d0780df1df839318b8ed8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a390089c062eac4a79ed20d731673608a1e61a7af94d0780df1df839318b8ed8/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a390089c062eac4a79ed20d731673608a1e61a7af94d0780df1df839318b8ed8/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.rlqbpy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:04 compute-0 podman[28938]: 2025-10-09 09:37:04.76001822 +0000 UTC m=+0.065444091 container init ae795f28e8cc40d40a12c989e9bbeb32107bf485450cd1f6b578cfeea442e1a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  9 09:37:04 compute-0 podman[28938]: 2025-10-09 09:37:04.767435814 +0000 UTC m=+0.072861674 container start ae795f28e8cc40d40a12c989e9bbeb32107bf485450cd1f6b578cfeea442e1a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:37:04 compute-0 bash[28938]: ae795f28e8cc40d40a12c989e9bbeb32107bf485450cd1f6b578cfeea442e1a5
Oct  9 09:37:04 compute-0 podman[28938]: 2025-10-09 09:37:04.710485917 +0000 UTC m=+0.015911799 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:04 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.rlqbpy for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:37:04 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:04 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  9 09:37:04 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:04 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  9 09:37:04 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:04 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  9 09:37:04 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:04 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  9 09:37:04 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:04 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  9 09:37:04 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:04 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  9 09:37:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:37:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:37:04 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:04 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  9 09:37:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:37:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:04 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev 5b7bc462-2530-416c-8eaa-4cce25a967df (Updating nfs.cephfs deployment (+3 -> 3))
Oct  9 09:37:04 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event 5b7bc462-2530-416c-8eaa-4cce25a967df (Updating nfs.cephfs deployment (+3 -> 3)) in 10 seconds
Oct  9 09:37:04 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:04 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:37:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:37:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:37:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:37:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:37:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:37:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:37:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:37:05 compute-0 ceph-mon[4497]: Rados config object exists: conf-nfs.cephfs
Oct  9 09:37:05 compute-0 ceph-mon[4497]: Creating key for client.nfs.cephfs.2.0.compute-0.rlqbpy-rgw
Oct  9 09:37:05 compute-0 ceph-mon[4497]: Bind address in nfs.cephfs.2.0.compute-0.rlqbpy's ganesha conf is defaulting to empty
Oct  9 09:37:05 compute-0 ceph-mon[4497]: Deploying daemon nfs.cephfs.2.0.compute-0.rlqbpy on compute-0
Oct  9 09:37:05 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:05 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:05 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:05 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:05 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:05 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:37:05 compute-0 podman[29073]: 2025-10-09 09:37:05.22034103 +0000 UTC m=+0.027737464 container create c942e5861591afe716a2277ed1be1234664377694571bdfda14db86fe6d41df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_stonebraker, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  9 09:37:05 compute-0 systemd[1]: Started libpod-conmon-c942e5861591afe716a2277ed1be1234664377694571bdfda14db86fe6d41df4.scope.
Oct  9 09:37:05 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:05 compute-0 podman[29073]: 2025-10-09 09:37:05.273417503 +0000 UTC m=+0.080813956 container init c942e5861591afe716a2277ed1be1234664377694571bdfda14db86fe6d41df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  9 09:37:05 compute-0 podman[29073]: 2025-10-09 09:37:05.277735985 +0000 UTC m=+0.085132417 container start c942e5861591afe716a2277ed1be1234664377694571bdfda14db86fe6d41df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  9 09:37:05 compute-0 podman[29073]: 2025-10-09 09:37:05.278880522 +0000 UTC m=+0.086276956 container attach c942e5861591afe716a2277ed1be1234664377694571bdfda14db86fe6d41df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:37:05 compute-0 hungry_stonebraker[29087]: 167 167
Oct  9 09:37:05 compute-0 systemd[1]: libpod-c942e5861591afe716a2277ed1be1234664377694571bdfda14db86fe6d41df4.scope: Deactivated successfully.
Oct  9 09:37:05 compute-0 podman[29073]: 2025-10-09 09:37:05.282295511 +0000 UTC m=+0.089691944 container died c942e5861591afe716a2277ed1be1234664377694571bdfda14db86fe6d41df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_stonebraker, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:37:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-8688c0d995ddce1d475e0e5fd8996f77af3a19669feefa3212edf937e09b4a50-merged.mount: Deactivated successfully.
Oct  9 09:37:05 compute-0 podman[29073]: 2025-10-09 09:37:05.299387584 +0000 UTC m=+0.106784017 container remove c942e5861591afe716a2277ed1be1234664377694571bdfda14db86fe6d41df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_stonebraker, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 09:37:05 compute-0 podman[29073]: 2025-10-09 09:37:05.20908532 +0000 UTC m=+0.016481763 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:05 compute-0 systemd[1]: libpod-conmon-c942e5861591afe716a2277ed1be1234664377694571bdfda14db86fe6d41df4.scope: Deactivated successfully.
Oct  9 09:37:05 compute-0 podman[29108]: 2025-10-09 09:37:05.411894328 +0000 UTC m=+0.027530173 container create 75ae7f6919f9a4921f39772259df96ce6f4eddb8f67a760b1d592e697a63dcbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  9 09:37:05 compute-0 systemd[1]: Started libpod-conmon-75ae7f6919f9a4921f39772259df96ce6f4eddb8f67a760b1d592e697a63dcbf.scope.
Oct  9 09:37:05 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86a8662ecdbe60e5da767b44fac715e6f33d8bb0815b2336c1eee9a32bc7ba3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86a8662ecdbe60e5da767b44fac715e6f33d8bb0815b2336c1eee9a32bc7ba3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86a8662ecdbe60e5da767b44fac715e6f33d8bb0815b2336c1eee9a32bc7ba3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86a8662ecdbe60e5da767b44fac715e6f33d8bb0815b2336c1eee9a32bc7ba3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86a8662ecdbe60e5da767b44fac715e6f33d8bb0815b2336c1eee9a32bc7ba3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:05 compute-0 podman[29108]: 2025-10-09 09:37:05.462808025 +0000 UTC m=+0.078443879 container init 75ae7f6919f9a4921f39772259df96ce6f4eddb8f67a760b1d592e697a63dcbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hermann, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:37:05 compute-0 podman[29108]: 2025-10-09 09:37:05.467902128 +0000 UTC m=+0.083537973 container start 75ae7f6919f9a4921f39772259df96ce6f4eddb8f67a760b1d592e697a63dcbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hermann, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid)
Oct  9 09:37:05 compute-0 podman[29108]: 2025-10-09 09:37:05.470903707 +0000 UTC m=+0.086539551 container attach 75ae7f6919f9a4921f39772259df96ce6f4eddb8f67a760b1d592e697a63dcbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hermann, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1)
Oct  9 09:37:05 compute-0 podman[29108]: 2025-10-09 09:37:05.400892808 +0000 UTC m=+0.016528682 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:05 compute-0 condescending_hermann[29121]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:37:05 compute-0 condescending_hermann[29121]: --> All data devices are unavailable
Oct  9 09:37:05 compute-0 systemd[1]: libpod-75ae7f6919f9a4921f39772259df96ce6f4eddb8f67a760b1d592e697a63dcbf.scope: Deactivated successfully.
Oct  9 09:37:05 compute-0 podman[29108]: 2025-10-09 09:37:05.729509699 +0000 UTC m=+0.345145542 container died 75ae7f6919f9a4921f39772259df96ce6f4eddb8f67a760b1d592e697a63dcbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:37:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d86a8662ecdbe60e5da767b44fac715e6f33d8bb0815b2336c1eee9a32bc7ba3-merged.mount: Deactivated successfully.
Oct  9 09:37:05 compute-0 podman[29108]: 2025-10-09 09:37:05.755806775 +0000 UTC m=+0.371442619 container remove 75ae7f6919f9a4921f39772259df96ce6f4eddb8f67a760b1d592e697a63dcbf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hermann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  9 09:37:05 compute-0 systemd[1]: libpod-conmon-75ae7f6919f9a4921f39772259df96ce6f4eddb8f67a760b1d592e697a63dcbf.scope: Deactivated successfully.
Oct  9 09:37:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:05.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:06.042387) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626042503, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 6062, "num_deletes": 254, "total_data_size": 13750186, "memory_usage": 14664432, "flush_reason": "Manual Compaction"}
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626072131, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 12295259, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 122, "largest_seqno": 6179, "table_properties": {"data_size": 12273914, "index_size": 13487, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6789, "raw_key_size": 64612, "raw_average_key_size": 23, "raw_value_size": 12221733, "raw_average_value_size": 4538, "num_data_blocks": 597, "num_entries": 2693, "num_filter_entries": 2693, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002420, "oldest_key_time": 1760002420, "file_creation_time": 1760002626, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 29924 microseconds, and 23680 cpu microseconds.
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:06.072318) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 12295259 bytes OK
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:06.072333) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:06.072776) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:06.072870) EVENT_LOG_v1 {"time_micros": 1760002626072867, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:06.072882) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13723182, prev total WAL file size 13724456, number of live WAL files 2.
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:06.076301) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323534' seq:0, type:0; will stop at (end)
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(11MB) 13(45KB) 8(1944B)]
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626076376, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 12343911, "oldest_snapshot_seqno": -1}
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 2496 keys, 12325910 bytes, temperature: kUnknown
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626097941, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 12325910, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12304992, "index_size": 13577, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6277, "raw_key_size": 63071, "raw_average_key_size": 25, "raw_value_size": 12254657, "raw_average_value_size": 4909, "num_data_blocks": 602, "num_entries": 2496, "num_filter_entries": 2496, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002419, "oldest_key_time": 0, "file_creation_time": 1760002626, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:06.098318) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 12325910 bytes
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:06.099054) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 565.9 rd, 565.1 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(11.8, 0.0 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 2786, records dropped: 290 output_compression: NoCompression
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:06.099072) EVENT_LOG_v1 {"time_micros": 1760002626099064, "job": 4, "event": "compaction_finished", "compaction_time_micros": 21813, "compaction_time_cpu_micros": 14750, "output_level": 6, "num_output_files": 1, "total_output_size": 12325910, "num_input_records": 2786, "num_output_records": 2496, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626100599, "job": 4, "event": "table_file_deletion", "file_number": 19}
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626100753, "job": 4, "event": "table_file_deletion", "file_number": 13}
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626100881, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct  9 09:37:06 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:06.076256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:37:06 compute-0 podman[29227]: 2025-10-09 09:37:06.174020098 +0000 UTC m=+0.027756119 container create 71579029d3607a681f21e5f1612791ce34aa7f34481469fbb261f0afe2cc3756 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  9 09:37:06 compute-0 systemd[1]: Started libpod-conmon-71579029d3607a681f21e5f1612791ce34aa7f34481469fbb261f0afe2cc3756.scope.
Oct  9 09:37:06 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:06 compute-0 podman[29227]: 2025-10-09 09:37:06.232619083 +0000 UTC m=+0.086355114 container init 71579029d3607a681f21e5f1612791ce34aa7f34481469fbb261f0afe2cc3756 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_brattain, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:37:06 compute-0 podman[29227]: 2025-10-09 09:37:06.236923006 +0000 UTC m=+0.090659018 container start 71579029d3607a681f21e5f1612791ce34aa7f34481469fbb261f0afe2cc3756 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_brattain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  9 09:37:06 compute-0 podman[29227]: 2025-10-09 09:37:06.237984938 +0000 UTC m=+0.091720950 container attach 71579029d3607a681f21e5f1612791ce34aa7f34481469fbb261f0afe2cc3756 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:37:06 compute-0 tender_brattain[29240]: 167 167
Oct  9 09:37:06 compute-0 systemd[1]: libpod-71579029d3607a681f21e5f1612791ce34aa7f34481469fbb261f0afe2cc3756.scope: Deactivated successfully.
Oct  9 09:37:06 compute-0 podman[29227]: 2025-10-09 09:37:06.240513456 +0000 UTC m=+0.094249467 container died 71579029d3607a681f21e5f1612791ce34aa7f34481469fbb261f0afe2cc3756 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:37:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6cf1a8da18fb5ae792df81782eee476d8bd9a5f104d3d32843fb36ad98688a3-merged.mount: Deactivated successfully.
Oct  9 09:37:06 compute-0 podman[29227]: 2025-10-09 09:37:06.257085178 +0000 UTC m=+0.110821189 container remove 71579029d3607a681f21e5f1612791ce34aa7f34481469fbb261f0afe2cc3756 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:37:06 compute-0 podman[29227]: 2025-10-09 09:37:06.16372023 +0000 UTC m=+0.017456261 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:06 compute-0 systemd[1]: libpod-conmon-71579029d3607a681f21e5f1612791ce34aa7f34481469fbb261f0afe2cc3756.scope: Deactivated successfully.
Oct  9 09:37:06 compute-0 podman[29261]: 2025-10-09 09:37:06.370618118 +0000 UTC m=+0.028800829 container create dffe36ae36f991b58531b087031da17fa19d6ed3f3d8c7319b62b17ec74cdef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_knuth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  9 09:37:06 compute-0 systemd[1]: Started libpod-conmon-dffe36ae36f991b58531b087031da17fa19d6ed3f3d8c7319b62b17ec74cdef0.scope.
Oct  9 09:37:06 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a3a7c050292be198c8a4c8cb61fa9c3602b95c2ad99ece89012239accbbcab7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a3a7c050292be198c8a4c8cb61fa9c3602b95c2ad99ece89012239accbbcab7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a3a7c050292be198c8a4c8cb61fa9c3602b95c2ad99ece89012239accbbcab7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a3a7c050292be198c8a4c8cb61fa9c3602b95c2ad99ece89012239accbbcab7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:06 compute-0 podman[29261]: 2025-10-09 09:37:06.420903429 +0000 UTC m=+0.079086160 container init dffe36ae36f991b58531b087031da17fa19d6ed3f3d8c7319b62b17ec74cdef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:37:06 compute-0 podman[29261]: 2025-10-09 09:37:06.426266149 +0000 UTC m=+0.084448859 container start dffe36ae36f991b58531b087031da17fa19d6ed3f3d8c7319b62b17ec74cdef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_knuth, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  9 09:37:06 compute-0 podman[29261]: 2025-10-09 09:37:06.427588552 +0000 UTC m=+0.085771284 container attach dffe36ae36f991b58531b087031da17fa19d6ed3f3d8c7319b62b17ec74cdef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_knuth, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:37:06 compute-0 podman[29261]: 2025-10-09 09:37:06.359515827 +0000 UTC m=+0.017698549 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  9 09:37:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:37:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:06.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  9 09:37:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 09:37:06 compute-0 jovial_knuth[29274]: {
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:    "1": [
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:        {
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:            "devices": [
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:                "/dev/loop3"
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:            ],
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:            "lv_name": "ceph_lv0",
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:            "lv_size": "21470642176",
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:            "name": "ceph_lv0",
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:            "tags": {
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:                "ceph.cluster_name": "ceph",
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:                "ceph.crush_device_class": "",
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:                "ceph.encrypted": "0",
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:                "ceph.osd_id": "1",
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:                "ceph.type": "block",
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:                "ceph.vdo": "0",
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:                "ceph.with_tpm": "0"
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:            },
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:            "type": "block",
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:            "vg_name": "ceph_vg0"
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:        }
Oct  9 09:37:06 compute-0 jovial_knuth[29274]:    ]
Oct  9 09:37:06 compute-0 jovial_knuth[29274]: }
Oct  9 09:37:06 compute-0 python3[29302]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:37:06 compute-0 systemd[1]: libpod-dffe36ae36f991b58531b087031da17fa19d6ed3f3d8c7319b62b17ec74cdef0.scope: Deactivated successfully.
Oct  9 09:37:06 compute-0 conmon[29274]: conmon dffe36ae36f991b58531 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dffe36ae36f991b58531b087031da17fa19d6ed3f3d8c7319b62b17ec74cdef0.scope/container/memory.events
Oct  9 09:37:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v14: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.9 KiB/s wr, 5 op/s
Oct  9 09:37:06 compute-0 podman[29321]: 2025-10-09 09:37:06.708616083 +0000 UTC m=+0.017424861 container died dffe36ae36f991b58531b087031da17fa19d6ed3f3d8c7319b62b17ec74cdef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_knuth, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  9 09:37:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a3a7c050292be198c8a4c8cb61fa9c3602b95c2ad99ece89012239accbbcab7-merged.mount: Deactivated successfully.
Oct  9 09:37:06 compute-0 podman[29321]: 2025-10-09 09:37:06.729623067 +0000 UTC m=+0.038431825 container remove dffe36ae36f991b58531b087031da17fa19d6ed3f3d8c7319b62b17ec74cdef0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_knuth, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:37:06 compute-0 systemd[1]: libpod-conmon-dffe36ae36f991b58531b087031da17fa19d6ed3f3d8c7319b62b17ec74cdef0.scope: Deactivated successfully.
Oct  9 09:37:06 compute-0 ceph-mgr[4772]: [dashboard INFO request] [192.168.122.100:56314] [GET] [200] [0.100s] [6.3K] [7f5c9b78-1b7d-4092-9301-e22432ef5e36] /
Oct  9 09:37:07 compute-0 python3[29406]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:37:07 compute-0 ceph-mgr[4772]: [dashboard INFO request] [192.168.122.100:56324] [GET] [200] [0.001s] [6.3K] [1cfa4ad3-94fb-47b9-b58c-3d3f29a1b41b] /
Oct  9 09:37:07 compute-0 podman[29438]: 2025-10-09 09:37:07.191178101 +0000 UTC m=+0.028013293 container create f236fde1a2e3ca943c19c1d654f3dc55384bf9a9abe3ae73451c61d5a9136e4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_williams, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:37:07 compute-0 systemd[1]: Started libpod-conmon-f236fde1a2e3ca943c19c1d654f3dc55384bf9a9abe3ae73451c61d5a9136e4d.scope.
Oct  9 09:37:07 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:07 compute-0 podman[29438]: 2025-10-09 09:37:07.245403149 +0000 UTC m=+0.082238352 container init f236fde1a2e3ca943c19c1d654f3dc55384bf9a9abe3ae73451c61d5a9136e4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_williams, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  9 09:37:07 compute-0 podman[29438]: 2025-10-09 09:37:07.250602431 +0000 UTC m=+0.087437613 container start f236fde1a2e3ca943c19c1d654f3dc55384bf9a9abe3ae73451c61d5a9136e4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_williams, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 09:37:07 compute-0 podman[29438]: 2025-10-09 09:37:07.252150951 +0000 UTC m=+0.088986153 container attach f236fde1a2e3ca943c19c1d654f3dc55384bf9a9abe3ae73451c61d5a9136e4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_williams, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 09:37:07 compute-0 gracious_williams[29451]: 167 167
Oct  9 09:37:07 compute-0 podman[29438]: 2025-10-09 09:37:07.254898982 +0000 UTC m=+0.091734163 container died f236fde1a2e3ca943c19c1d654f3dc55384bf9a9abe3ae73451c61d5a9136e4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_williams, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:37:07 compute-0 systemd[1]: libpod-f236fde1a2e3ca943c19c1d654f3dc55384bf9a9abe3ae73451c61d5a9136e4d.scope: Deactivated successfully.
Oct  9 09:37:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f5555098ef8f6c7eb8f7550c459e618e1ce13e58a19e1ed57e68ea2ba56e8e8-merged.mount: Deactivated successfully.
Oct  9 09:37:07 compute-0 podman[29438]: 2025-10-09 09:37:07.179960954 +0000 UTC m=+0.016796156 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:07 compute-0 podman[29438]: 2025-10-09 09:37:07.295707116 +0000 UTC m=+0.132542297 container remove f236fde1a2e3ca943c19c1d654f3dc55384bf9a9abe3ae73451c61d5a9136e4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_williams, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  9 09:37:07 compute-0 systemd[1]: libpod-conmon-f236fde1a2e3ca943c19c1d654f3dc55384bf9a9abe3ae73451c61d5a9136e4d.scope: Deactivated successfully.
Oct  9 09:37:07 compute-0 podman[29474]: 2025-10-09 09:37:07.407041661 +0000 UTC m=+0.027017236 container create 021f2c8243be37e92688bb77a85847f48733adca3357488acbd2e86082358314 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_clarke, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:37:07 compute-0 systemd[1]: Started libpod-conmon-021f2c8243be37e92688bb77a85847f48733adca3357488acbd2e86082358314.scope.
Oct  9 09:37:07 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/790fc59b42ceaf29bc6976d8f15d640003f823361470e2eb6dbbf5799b2f2d23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/790fc59b42ceaf29bc6976d8f15d640003f823361470e2eb6dbbf5799b2f2d23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/790fc59b42ceaf29bc6976d8f15d640003f823361470e2eb6dbbf5799b2f2d23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/790fc59b42ceaf29bc6976d8f15d640003f823361470e2eb6dbbf5799b2f2d23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:07 compute-0 podman[29474]: 2025-10-09 09:37:07.469651187 +0000 UTC m=+0.089626782 container init 021f2c8243be37e92688bb77a85847f48733adca3357488acbd2e86082358314 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct  9 09:37:07 compute-0 podman[29474]: 2025-10-09 09:37:07.474119219 +0000 UTC m=+0.094094795 container start 021f2c8243be37e92688bb77a85847f48733adca3357488acbd2e86082358314 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_clarke, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:37:07 compute-0 podman[29474]: 2025-10-09 09:37:07.475438447 +0000 UTC m=+0.095414032 container attach 021f2c8243be37e92688bb77a85847f48733adca3357488acbd2e86082358314 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct  9 09:37:07 compute-0 podman[29474]: 2025-10-09 09:37:07.396512671 +0000 UTC m=+0.016488266 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:07.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:07 compute-0 lvm[29564]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:37:07 compute-0 lvm[29564]: VG ceph_vg0 finished
Oct  9 09:37:07 compute-0 quirky_clarke[29487]: {}
Oct  9 09:37:07 compute-0 systemd[1]: libpod-021f2c8243be37e92688bb77a85847f48733adca3357488acbd2e86082358314.scope: Deactivated successfully.
Oct  9 09:37:07 compute-0 podman[29474]: 2025-10-09 09:37:07.975618519 +0000 UTC m=+0.595594094 container died 021f2c8243be37e92688bb77a85847f48733adca3357488acbd2e86082358314 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 09:37:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-790fc59b42ceaf29bc6976d8f15d640003f823361470e2eb6dbbf5799b2f2d23-merged.mount: Deactivated successfully.
Oct  9 09:37:07 compute-0 podman[29474]: 2025-10-09 09:37:07.998846981 +0000 UTC m=+0.618822556 container remove 021f2c8243be37e92688bb77a85847f48733adca3357488acbd2e86082358314 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_clarke, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:37:08 compute-0 systemd[1]: libpod-conmon-021f2c8243be37e92688bb77a85847f48733adca3357488acbd2e86082358314.scope: Deactivated successfully.
Oct  9 09:37:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:37:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:37:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Oct  9 09:37:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:08 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:08 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:08 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:08.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:08 compute-0 podman[29734]: 2025-10-09 09:37:08.664292532 +0000 UTC m=+0.036907181 container exec fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:37:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v15: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Oct  9 09:37:08 compute-0 podman[29734]: 2025-10-09 09:37:08.748347819 +0000 UTC m=+0.120962458 container exec_died fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  9 09:37:09 compute-0 podman[29829]: 2025-10-09 09:37:09.050363859 +0000 UTC m=+0.034549586 container exec f6c5e5aaa66e540d2596b51d05e5f681f364ae1190d47d1f1326559548314a4b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:09 compute-0 podman[29829]: 2025-10-09 09:37:09.060383198 +0000 UTC m=+0.044568935 container exec_died f6c5e5aaa66e540d2596b51d05e5f681f364ae1190d47d1f1326559548314a4b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:37:09 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:37:09 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:09 compute-0 podman[29913]: 2025-10-09 09:37:09.31106498 +0000 UTC m=+0.037313216 container exec bd3cbdfb5f1cb9bb74e2043c48786e84aea19baa506d844adecf836d2e2fa6f1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:09 compute-0 podman[29913]: 2025-10-09 09:37:09.331341937 +0000 UTC m=+0.057590163 container exec_died bd3cbdfb5f1cb9bb74e2043c48786e84aea19baa506d844adecf836d2e2fa6f1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:09 compute-0 podman[29971]: 2025-10-09 09:37:09.475408591 +0000 UTC m=+0.034658772 container exec 80f41780a224394d2e72978ad05b417bbf3d1eeac5620f866d5082d3b8450db5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:09 compute-0 ceph-mgr[4772]: [progress INFO root] Writing back 14 completed events
Oct  9 09:37:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 09:37:09 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:09 compute-0 podman[29971]: 2025-10-09 09:37:09.596043559 +0000 UTC m=+0.155293731 container exec_died 80f41780a224394d2e72978ad05b417bbf3d1eeac5620f866d5082d3b8450db5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:09 compute-0 podman[30029]: 2025-10-09 09:37:09.734503002 +0000 UTC m=+0.034525760 container exec 0c3906f36b8c5387e26601a1089154bdda03c8f87fbea5119420184790883682 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-0-kmcywb)
Oct  9 09:37:09 compute-0 podman[30029]: 2025-10-09 09:37:09.741288966 +0000 UTC m=+0.041311713 container exec_died 0c3906f36b8c5387e26601a1089154bdda03c8f87fbea5119420184790883682 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-0-kmcywb)
Oct  9 09:37:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:09.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:09 compute-0 podman[30082]: 2025-10-09 09:37:09.871184774 +0000 UTC m=+0.032311546 container exec 45254cf9a2cd91037496049d12c8fdc604c0d669b06c7d761c3228749e14c043 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha, distribution-scope=public, description=keepalived for Ceph, architecture=x86_64, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, name=keepalived, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, version=2.2.4)
Oct  9 09:37:09 compute-0 podman[30082]: 2025-10-09 09:37:09.879435459 +0000 UTC m=+0.040562231 container exec_died 45254cf9a2cd91037496049d12c8fdc604c0d669b06c7d761c3228749e14c043 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, release=1793, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64)
Oct  9 09:37:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:37:09 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:37:09 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:10 compute-0 podman[30134]: 2025-10-09 09:37:10.027281958 +0000 UTC m=+0.037189753 container exec ad7aeb5739d77e7c0db5bedadf9f04170fb86eb3e4620e2c374ce0ab10bde8f2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:10 compute-0 podman[30134]: 2025-10-09 09:37:10.055383427 +0000 UTC m=+0.065291222 container exec_died ad7aeb5739d77e7c0db5bedadf9f04170fb86eb3e4620e2c374ce0ab10bde8f2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:10 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:10 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:10 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:10 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:10 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:10 compute-0 podman[30183]: 2025-10-09 09:37:10.161556963 +0000 UTC m=+0.032545900 container exec ae795f28e8cc40d40a12c989e9bbeb32107bf485450cd1f6b578cfeea442e1a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:37:10 compute-0 podman[30200]: 2025-10-09 09:37:10.220218565 +0000 UTC m=+0.044855455 container exec_died ae795f28e8cc40d40a12c989e9bbeb32107bf485450cd1f6b578cfeea442e1a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:37:10 compute-0 podman[30183]: 2025-10-09 09:37:10.222521267 +0000 UTC m=+0.093510184 container exec_died ae795f28e8cc40d40a12c989e9bbeb32107bf485450cd1f6b578cfeea442e1a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:37:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:37:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:37:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:37:10 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:37:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:37:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:37:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:37:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v16: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.6 KiB/s wr, 4 op/s
Oct  9 09:37:10 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev 4ee86be2-721a-4251-bac2-6904889160e6 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Oct  9 09:37:10 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.oqhtjo on compute-1
Oct  9 09:37:10 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.oqhtjo on compute-1
Oct  9 09:37:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:10.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:37:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:11 compute-0 ceph-mon[4497]: Deploying daemon haproxy.nfs.cephfs.compute-1.oqhtjo on compute-1
Oct  9 09:37:11 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 1 service(s): ingress.nfs.cephfs)
Oct  9 09:37:11 compute-0 ceph-mon[4497]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  9 09:37:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:11.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:12 compute-0 ceph-mon[4497]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 1 service(s): ingress.nfs.cephfs)
Oct  9 09:37:12 compute-0 ceph-mon[4497]: Cluster is now healthy
Oct  9 09:37:12 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v17: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.9 KiB/s wr, 7 op/s
Oct  9 09:37:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:37:12] "GET /metrics HTTP/1.1" 200 48324 "" "Prometheus/2.51.0"
Oct  9 09:37:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:37:12] "GET /metrics HTTP/1.1" 200 48324 "" "Prometheus/2.51.0"
Oct  9 09:37:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:12.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:13 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:37:13 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:13 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:37:13 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:13 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  9 09:37:13 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:13 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.ujrhwc on compute-0
Oct  9 09:37:13 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.ujrhwc on compute-0
Oct  9 09:37:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:13.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:13 compute-0 podman[30294]: 2025-10-09 09:37:13.974460887 +0000 UTC m=+0.028601111 container create 744fbac80cdfff85301d26c8d3bde526a0740479af54c0abbe8c590714478930 (image=quay.io/ceph/haproxy:2.3, name=nice_galileo)
Oct  9 09:37:13 compute-0 systemd[1]: Started libpod-conmon-744fbac80cdfff85301d26c8d3bde526a0740479af54c0abbe8c590714478930.scope.
Oct  9 09:37:14 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:14 compute-0 podman[30294]: 2025-10-09 09:37:14.027590902 +0000 UTC m=+0.081731115 container init 744fbac80cdfff85301d26c8d3bde526a0740479af54c0abbe8c590714478930 (image=quay.io/ceph/haproxy:2.3, name=nice_galileo)
Oct  9 09:37:14 compute-0 podman[30294]: 2025-10-09 09:37:14.032299679 +0000 UTC m=+0.086439893 container start 744fbac80cdfff85301d26c8d3bde526a0740479af54c0abbe8c590714478930 (image=quay.io/ceph/haproxy:2.3, name=nice_galileo)
Oct  9 09:37:14 compute-0 podman[30294]: 2025-10-09 09:37:14.033455769 +0000 UTC m=+0.087596002 container attach 744fbac80cdfff85301d26c8d3bde526a0740479af54c0abbe8c590714478930 (image=quay.io/ceph/haproxy:2.3, name=nice_galileo)
Oct  9 09:37:14 compute-0 nice_galileo[30308]: 0 0
Oct  9 09:37:14 compute-0 systemd[1]: libpod-744fbac80cdfff85301d26c8d3bde526a0740479af54c0abbe8c590714478930.scope: Deactivated successfully.
Oct  9 09:37:14 compute-0 conmon[30308]: conmon 744fbac80cdfff85301d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-744fbac80cdfff85301d26c8d3bde526a0740479af54c0abbe8c590714478930.scope/container/memory.events
Oct  9 09:37:14 compute-0 podman[30294]: 2025-10-09 09:37:14.036513423 +0000 UTC m=+0.090653637 container died 744fbac80cdfff85301d26c8d3bde526a0740479af54c0abbe8c590714478930 (image=quay.io/ceph/haproxy:2.3, name=nice_galileo)
Oct  9 09:37:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1ac0a590cc0409d82778f95a3d6c9a49413345f8b5fe79a86875d93d1116123-merged.mount: Deactivated successfully.
Oct  9 09:37:14 compute-0 podman[30294]: 2025-10-09 09:37:14.054582648 +0000 UTC m=+0.108722862 container remove 744fbac80cdfff85301d26c8d3bde526a0740479af54c0abbe8c590714478930 (image=quay.io/ceph/haproxy:2.3, name=nice_galileo)
Oct  9 09:37:14 compute-0 podman[30294]: 2025-10-09 09:37:13.962628008 +0000 UTC m=+0.016768242 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct  9 09:37:14 compute-0 systemd[1]: libpod-conmon-744fbac80cdfff85301d26c8d3bde526a0740479af54c0abbe8c590714478930.scope: Deactivated successfully.
Oct  9 09:37:14 compute-0 systemd[1]: Reloading.
Oct  9 09:37:14 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:37:14 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:37:14 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v18: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.9 KiB/s wr, 7 op/s
Oct  9 09:37:14 compute-0 systemd[1]: Reloading.
Oct  9 09:37:14 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:37:14 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:37:14 compute-0 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.ujrhwc for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:37:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:14 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:14.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:14 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:14 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:14 compute-0 ceph-mon[4497]: Deploying daemon haproxy.nfs.cephfs.compute-0.ujrhwc on compute-0
Oct  9 09:37:14 compute-0 podman[30443]: 2025-10-09 09:37:14.694794417 +0000 UTC m=+0.027806995 container create 7000763ef5790790fc25ab12bf5fb3305593cecba348b112027987ea956fdad8 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-0-ujrhwc)
Oct  9 09:37:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7c19110ad099a41502913c420e0600b16e9c40f3cdaf0386de4b70acb944ee/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:14 compute-0 podman[30443]: 2025-10-09 09:37:14.735311452 +0000 UTC m=+0.068324040 container init 7000763ef5790790fc25ab12bf5fb3305593cecba348b112027987ea956fdad8 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-0-ujrhwc)
Oct  9 09:37:14 compute-0 podman[30443]: 2025-10-09 09:37:14.739547738 +0000 UTC m=+0.072560326 container start 7000763ef5790790fc25ab12bf5fb3305593cecba348b112027987ea956fdad8 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-0-ujrhwc)
Oct  9 09:37:14 compute-0 bash[30443]: 7000763ef5790790fc25ab12bf5fb3305593cecba348b112027987ea956fdad8
Oct  9 09:37:14 compute-0 podman[30443]: 2025-10-09 09:37:14.683348949 +0000 UTC m=+0.016361547 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct  9 09:37:14 compute-0 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.ujrhwc for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:37:14 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-0-ujrhwc[30455]: [NOTICE] 281/093714 (2) : New worker #1 (4) forked
Oct  9 09:37:14 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-0-ujrhwc[30455]: [WARNING] 281/093714 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:37:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:37:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:37:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  9 09:37:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:14 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.iyubhq on compute-2
Oct  9 09:37:14 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.iyubhq on compute-2
Oct  9 09:37:14 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:14 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b8000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:15 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:15 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:15 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:15 compute-0 ceph-mon[4497]: Deploying daemon haproxy.nfs.cephfs.compute-2.iyubhq on compute-2
Oct  9 09:37:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:15.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:16 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac0020a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:37:16 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:37:16 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  9 09:37:16 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Oct  9 09:37:16 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:16 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:37:16 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:37:16 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:37:16 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:37:16 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 09:37:16 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 09:37:16 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.dgxvnq on compute-2
Oct  9 09:37:16 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.dgxvnq on compute-2
Oct  9 09:37:16 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v19: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.9 KiB/s wr, 7 op/s
Oct  9 09:37:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:16.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:16 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac0020a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:17 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:17 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:17 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:17 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:17 compute-0 ceph-mon[4497]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:37:17 compute-0 ceph-mon[4497]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:37:17 compute-0 ceph-mon[4497]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 09:37:17 compute-0 ceph-mon[4497]: Deploying daemon keepalived.nfs.cephfs.compute-2.dgxvnq on compute-2
Oct  9 09:37:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:17 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b8001d70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:17 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:37:17 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:17 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:37:17 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:17 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  9 09:37:17 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:17 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 09:37:17 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 09:37:17 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:37:17 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:37:17 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:37:17 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:37:17 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.zabdum on compute-1
Oct  9 09:37:17 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.zabdum on compute-1
Oct  9 09:37:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:17.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:18 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b4001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:18 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v20: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 973 B/s wr, 4 op/s
Oct  9 09:37:18 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:18 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:18 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:18 compute-0 ceph-mon[4497]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 09:37:18 compute-0 ceph-mon[4497]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:37:18 compute-0 ceph-mon[4497]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:37:18 compute-0 ceph-mon[4497]: Deploying daemon keepalived.nfs.cephfs.compute-1.zabdum on compute-1
Oct  9 09:37:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:18.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:18 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac003220 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:19 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:19 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac003220 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Oct  9 09:37:19 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:37:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:37:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:37:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:37:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:37:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:37:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:37:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:37:19 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=infra.usagestats t=2025-10-09T09:37:19.807988003Z level=info msg="Usage stats are ready to report"
Oct  9 09:37:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:19.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:20 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac003220 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:20 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v21: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 973 B/s wr, 4 op/s
Oct  9 09:37:20 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:20.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:20 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b40025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha[25986]: Thu Oct  9 09:37:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:37:21 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:37:21 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  9 09:37:21 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:21 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:37:21 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:37:21 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 09:37:21 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 09:37:21 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:37:21 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:37:21 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.qjivil on compute-0
Oct  9 09:37:21 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.qjivil on compute-0
Oct  9 09:37:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:21 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac0045c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:21 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:21 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:21 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:21 compute-0 podman[30552]: 2025-10-09 09:37:21.719466798 +0000 UTC m=+0.025450613 container create 8e648be67de8a40b83b2548e3b894898d54c8645c55e5323c53e61f14ae0dd27 (image=quay.io/ceph/keepalived:2.2.4, name=practical_lamport, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, vcs-type=git, description=keepalived for Ceph, name=keepalived, version=2.2.4, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container)
Oct  9 09:37:21 compute-0 systemd[1]: Started libpod-conmon-8e648be67de8a40b83b2548e3b894898d54c8645c55e5323c53e61f14ae0dd27.scope.
Oct  9 09:37:21 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:21 compute-0 podman[30552]: 2025-10-09 09:37:21.771780643 +0000 UTC m=+0.077764458 container init 8e648be67de8a40b83b2548e3b894898d54c8645c55e5323c53e61f14ae0dd27 (image=quay.io/ceph/keepalived:2.2.4, name=practical_lamport, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, architecture=x86_64, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, description=keepalived for Ceph)
Oct  9 09:37:21 compute-0 podman[30552]: 2025-10-09 09:37:21.776428305 +0000 UTC m=+0.082412121 container start 8e648be67de8a40b83b2548e3b894898d54c8645c55e5323c53e61f14ae0dd27 (image=quay.io/ceph/keepalived:2.2.4, name=practical_lamport, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, version=2.2.4, vcs-type=git, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, architecture=x86_64)
Oct  9 09:37:21 compute-0 podman[30552]: 2025-10-09 09:37:21.777844325 +0000 UTC m=+0.083828159 container attach 8e648be67de8a40b83b2548e3b894898d54c8645c55e5323c53e61f14ae0dd27 (image=quay.io/ceph/keepalived:2.2.4, name=practical_lamport, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=keepalived, architecture=x86_64, com.redhat.component=keepalived-container, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.openshift.tags=Ceph keepalived)
Oct  9 09:37:21 compute-0 practical_lamport[30565]: 0 0
Oct  9 09:37:21 compute-0 systemd[1]: libpod-8e648be67de8a40b83b2548e3b894898d54c8645c55e5323c53e61f14ae0dd27.scope: Deactivated successfully.
Oct  9 09:37:21 compute-0 podman[30552]: 2025-10-09 09:37:21.780352493 +0000 UTC m=+0.086336308 container died 8e648be67de8a40b83b2548e3b894898d54c8645c55e5323c53e61f14ae0dd27 (image=quay.io/ceph/keepalived:2.2.4, name=practical_lamport, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.buildah.version=1.28.2, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, build-date=2023-02-22T09:23:20, release=1793, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct  9 09:37:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-41a36613452240a20b63a0e23ccb0763bebe51cf70f74d133628e6e211d7566a-merged.mount: Deactivated successfully.
Oct  9 09:37:21 compute-0 podman[30552]: 2025-10-09 09:37:21.798174212 +0000 UTC m=+0.104158027 container remove 8e648be67de8a40b83b2548e3b894898d54c8645c55e5323c53e61f14ae0dd27 (image=quay.io/ceph/keepalived:2.2.4, name=practical_lamport, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, release=1793, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, architecture=x86_64, name=keepalived, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.openshift.expose-services=)
Oct  9 09:37:21 compute-0 podman[30552]: 2025-10-09 09:37:21.708750133 +0000 UTC m=+0.014733968 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct  9 09:37:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:21.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:21 compute-0 systemd[1]: libpod-conmon-8e648be67de8a40b83b2548e3b894898d54c8645c55e5323c53e61f14ae0dd27.scope: Deactivated successfully.
Oct  9 09:37:21 compute-0 systemd[1]: Reloading.
Oct  9 09:37:21 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:37:21 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:37:22 compute-0 systemd[1]: Reloading.
Oct  9 09:37:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:22 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac0045c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:22 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:37:22 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:37:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha[25986]: Thu Oct  9 09:37:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:22 compute-0 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.qjivil for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:37:22 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v22: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 938 B/s wr, 4 op/s
Oct  9 09:37:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:37:22] "GET /metrics HTTP/1.1" 200 48326 "" "Prometheus/2.51.0"
Oct  9 09:37:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:37:22] "GET /metrics HTTP/1.1" 200 48326 "" "Prometheus/2.51.0"
Oct  9 09:37:22 compute-0 podman[30701]: 2025-10-09 09:37:22.401814874 +0000 UTC m=+0.026484310 container create bdad08d3892aadc40e4a8fa45df80eb2003152c48fd9e9cf89eda1157025751c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-0-qjivil, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, vcs-type=git, version=2.2.4, name=keepalived, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, build-date=2023-02-22T09:23:20)
Oct  9 09:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a44160bd2ba70bc5ad65d6f7af4d4ed87227c444584e6323db548b101e1fd60/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:22 compute-0 podman[30701]: 2025-10-09 09:37:22.433100597 +0000 UTC m=+0.057770022 container init bdad08d3892aadc40e4a8fa45df80eb2003152c48fd9e9cf89eda1157025751c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-0-qjivil, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1793, io.openshift.tags=Ceph keepalived, vcs-type=git, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph)
Oct  9 09:37:22 compute-0 podman[30701]: 2025-10-09 09:37:22.436933743 +0000 UTC m=+0.061603169 container start bdad08d3892aadc40e4a8fa45df80eb2003152c48fd9e9cf89eda1157025751c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-0-qjivil, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, architecture=x86_64, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, name=keepalived, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793)
Oct  9 09:37:22 compute-0 bash[30701]: bdad08d3892aadc40e4a8fa45df80eb2003152c48fd9e9cf89eda1157025751c
Oct  9 09:37:22 compute-0 podman[30701]: 2025-10-09 09:37:22.390812492 +0000 UTC m=+0.015481938 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct  9 09:37:22 compute-0 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.qjivil for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:37:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-0-qjivil[30713]: Thu Oct  9 09:37:22 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct  9 09:37:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-0-qjivil[30713]: Thu Oct  9 09:37:22 2025: Running on Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 (built for Linux 5.14.0)
Oct  9 09:37:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-0-qjivil[30713]: Thu Oct  9 09:37:22 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct  9 09:37:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-0-qjivil[30713]: Thu Oct  9 09:37:22 2025: Configuration file /etc/keepalived/keepalived.conf
Oct  9 09:37:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-0-qjivil[30713]: Thu Oct  9 09:37:22 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Oct  9 09:37:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-0-qjivil[30713]: Thu Oct  9 09:37:22 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct  9 09:37:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-0-qjivil[30713]: Thu Oct  9 09:37:22 2025: Starting VRRP child process, pid=4
Oct  9 09:37:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-0-qjivil[30713]: Thu Oct  9 09:37:22 2025: Startup complete
Oct  9 09:37:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha[25986]: Thu Oct  9 09:37:22 2025: (VI_0) Entering BACKUP STATE
Oct  9 09:37:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-0-qjivil[30713]: Thu Oct  9 09:37:22 2025: (VI_0) Entering BACKUP STATE (init)
Oct  9 09:37:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-0-qjivil[30713]: Thu Oct  9 09:37:22 2025: VRRP_Script(check_backend) succeeded
Oct  9 09:37:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:37:22 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:37:22 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  9 09:37:22 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:22 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev 4ee86be2-721a-4251-bac2-6904889160e6 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Oct  9 09:37:22 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event 4ee86be2-721a-4251-bac2-6904889160e6 (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 12 seconds
Oct  9 09:37:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Oct  9 09:37:22 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:37:22 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:37:22 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:37:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:37:22 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:37:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:37:22 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:37:22 compute-0 ceph-mon[4497]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:37:22 compute-0 ceph-mon[4497]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 09:37:22 compute-0 ceph-mon[4497]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:37:22 compute-0 ceph-mon[4497]: Deploying daemon keepalived.nfs.cephfs.compute-0.qjivil on compute-0
Oct  9 09:37:22 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:22 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:22 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:22 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:22 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:22 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:37:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:22.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:22 compute-0 podman[30802]: 2025-10-09 09:37:22.878192933 +0000 UTC m=+0.026759350 container create 4519f23fb4786ff416854a5674294bf4e9fd33c21898fe695ff9b534f9d1d12f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_euclid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  9 09:37:22 compute-0 systemd[1]: Started libpod-conmon-4519f23fb4786ff416854a5674294bf4e9fd33c21898fe695ff9b534f9d1d12f.scope.
Oct  9 09:37:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:22 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac0045c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:22 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:22 compute-0 podman[30802]: 2025-10-09 09:37:22.93058126 +0000 UTC m=+0.079147696 container init 4519f23fb4786ff416854a5674294bf4e9fd33c21898fe695ff9b534f9d1d12f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:37:22 compute-0 podman[30802]: 2025-10-09 09:37:22.935284857 +0000 UTC m=+0.083851273 container start 4519f23fb4786ff416854a5674294bf4e9fd33c21898fe695ff9b534f9d1d12f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_euclid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:37:22 compute-0 podman[30802]: 2025-10-09 09:37:22.937539287 +0000 UTC m=+0.086105703 container attach 4519f23fb4786ff416854a5674294bf4e9fd33c21898fe695ff9b534f9d1d12f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_euclid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:37:22 compute-0 nifty_euclid[30815]: 167 167
Oct  9 09:37:22 compute-0 systemd[1]: libpod-4519f23fb4786ff416854a5674294bf4e9fd33c21898fe695ff9b534f9d1d12f.scope: Deactivated successfully.
Oct  9 09:37:22 compute-0 conmon[30815]: conmon 4519f23fb4786ff41685 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4519f23fb4786ff416854a5674294bf4e9fd33c21898fe695ff9b534f9d1d12f.scope/container/memory.events
Oct  9 09:37:22 compute-0 podman[30802]: 2025-10-09 09:37:22.939469025 +0000 UTC m=+0.088035441 container died 4519f23fb4786ff416854a5674294bf4e9fd33c21898fe695ff9b534f9d1d12f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_euclid, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  9 09:37:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-79450188397a1d88b727a3e2cbea1465141b95297addac003a2534d80abf9970-merged.mount: Deactivated successfully.
Oct  9 09:37:22 compute-0 podman[30802]: 2025-10-09 09:37:22.957820452 +0000 UTC m=+0.106386869 container remove 4519f23fb4786ff416854a5674294bf4e9fd33c21898fe695ff9b534f9d1d12f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:37:22 compute-0 podman[30802]: 2025-10-09 09:37:22.866939047 +0000 UTC m=+0.015505484 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:22 compute-0 systemd[1]: libpod-conmon-4519f23fb4786ff416854a5674294bf4e9fd33c21898fe695ff9b534f9d1d12f.scope: Deactivated successfully.
Oct  9 09:37:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha[25986]: Thu Oct  9 09:37:23 2025: (VI_0) Entering MASTER STATE
Oct  9 09:37:23 compute-0 podman[30837]: 2025-10-09 09:37:23.07444512 +0000 UTC m=+0.032086983 container create 3d0b717a40de86784afeb8fe6bbedce3d4789760dd8d672e34df0fa5aff85a14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 09:37:23 compute-0 systemd[1]: Started libpod-conmon-3d0b717a40de86784afeb8fe6bbedce3d4789760dd8d672e34df0fa5aff85a14.scope.
Oct  9 09:37:23 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e218668f680999728517b5182a0201a1e9627e49aec0b1b84d5a995dd35880fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e218668f680999728517b5182a0201a1e9627e49aec0b1b84d5a995dd35880fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e218668f680999728517b5182a0201a1e9627e49aec0b1b84d5a995dd35880fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e218668f680999728517b5182a0201a1e9627e49aec0b1b84d5a995dd35880fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e218668f680999728517b5182a0201a1e9627e49aec0b1b84d5a995dd35880fd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:23 compute-0 podman[30837]: 2025-10-09 09:37:23.127011072 +0000 UTC m=+0.084652944 container init 3d0b717a40de86784afeb8fe6bbedce3d4789760dd8d672e34df0fa5aff85a14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_saha, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:37:23 compute-0 podman[30837]: 2025-10-09 09:37:23.131298214 +0000 UTC m=+0.088940076 container start 3d0b717a40de86784afeb8fe6bbedce3d4789760dd8d672e34df0fa5aff85a14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:37:23 compute-0 podman[30837]: 2025-10-09 09:37:23.132969475 +0000 UTC m=+0.090611336 container attach 3d0b717a40de86784afeb8fe6bbedce3d4789760dd8d672e34df0fa5aff85a14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_saha, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:37:23 compute-0 podman[30837]: 2025-10-09 09:37:23.059321178 +0000 UTC m=+0.016963060 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:23 compute-0 elegant_saha[30850]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:37:23 compute-0 elegant_saha[30850]: --> All data devices are unavailable
Oct  9 09:37:23 compute-0 systemd[1]: libpod-3d0b717a40de86784afeb8fe6bbedce3d4789760dd8d672e34df0fa5aff85a14.scope: Deactivated successfully.
Oct  9 09:37:23 compute-0 conmon[30850]: conmon 3d0b717a40de86784afe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3d0b717a40de86784afeb8fe6bbedce3d4789760dd8d672e34df0fa5aff85a14.scope/container/memory.events
Oct  9 09:37:23 compute-0 podman[30837]: 2025-10-09 09:37:23.397321819 +0000 UTC m=+0.354963681 container died 3d0b717a40de86784afeb8fe6bbedce3d4789760dd8d672e34df0fa5aff85a14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:37:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e218668f680999728517b5182a0201a1e9627e49aec0b1b84d5a995dd35880fd-merged.mount: Deactivated successfully.
Oct  9 09:37:23 compute-0 podman[30837]: 2025-10-09 09:37:23.418965173 +0000 UTC m=+0.376607035 container remove 3d0b717a40de86784afeb8fe6bbedce3d4789760dd8d672e34df0fa5aff85a14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_saha, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  9 09:37:23 compute-0 systemd[1]: libpod-conmon-3d0b717a40de86784afeb8fe6bbedce3d4789760dd8d672e34df0fa5aff85a14.scope: Deactivated successfully.
Oct  9 09:37:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:23 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b40025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:23 compute-0 podman[30955]: 2025-10-09 09:37:23.809938251 +0000 UTC m=+0.026411323 container create 3780cad50bbe2cdff65b7357fac10e278988f3e3f089afd80be22ba7a4ad6078 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_elion, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:37:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:23.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:23 compute-0 systemd[1]: Started libpod-conmon-3780cad50bbe2cdff65b7357fac10e278988f3e3f089afd80be22ba7a4ad6078.scope.
Oct  9 09:37:23 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:23 compute-0 podman[30955]: 2025-10-09 09:37:23.856903713 +0000 UTC m=+0.073376785 container init 3780cad50bbe2cdff65b7357fac10e278988f3e3f089afd80be22ba7a4ad6078 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_elion, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:37:23 compute-0 podman[30955]: 2025-10-09 09:37:23.861118779 +0000 UTC m=+0.077591852 container start 3780cad50bbe2cdff65b7357fac10e278988f3e3f089afd80be22ba7a4ad6078 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:37:23 compute-0 confident_elion[30968]: 167 167
Oct  9 09:37:23 compute-0 systemd[1]: libpod-3780cad50bbe2cdff65b7357fac10e278988f3e3f089afd80be22ba7a4ad6078.scope: Deactivated successfully.
Oct  9 09:37:23 compute-0 podman[30955]: 2025-10-09 09:37:23.882920382 +0000 UTC m=+0.099393454 container attach 3780cad50bbe2cdff65b7357fac10e278988f3e3f089afd80be22ba7a4ad6078 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_elion, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:37:23 compute-0 podman[30955]: 2025-10-09 09:37:23.883106823 +0000 UTC m=+0.099579905 container died 3780cad50bbe2cdff65b7357fac10e278988f3e3f089afd80be22ba7a4ad6078 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_elion, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct  9 09:37:23 compute-0 podman[30955]: 2025-10-09 09:37:23.799390696 +0000 UTC m=+0.015863788 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-2589ee89da27eb69968c8c72181b2416818a15e1ee123af13bd8162aaf1ff7cf-merged.mount: Deactivated successfully.
Oct  9 09:37:23 compute-0 podman[30955]: 2025-10-09 09:37:23.910901333 +0000 UTC m=+0.127374405 container remove 3780cad50bbe2cdff65b7357fac10e278988f3e3f089afd80be22ba7a4ad6078 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_elion, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  9 09:37:23 compute-0 systemd[1]: libpod-conmon-3780cad50bbe2cdff65b7357fac10e278988f3e3f089afd80be22ba7a4ad6078.scope: Deactivated successfully.
Oct  9 09:37:24 compute-0 podman[30990]: 2025-10-09 09:37:24.020177337 +0000 UTC m=+0.027867107 container create 350706f7edbf6f0e4a2a0567bd38842cfed53ad68c7ff102720b3299ce3f8515 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_ramanujan, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  9 09:37:24 compute-0 systemd[1]: Started libpod-conmon-350706f7edbf6f0e4a2a0567bd38842cfed53ad68c7ff102720b3299ce3f8515.scope.
Oct  9 09:37:24 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f629c5aacf7beed2ffec0b948144cc97f555d6cabe0bb99c7e392c53917f78e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:24 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha[25986]: Thu Oct  9 09:37:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:24 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-0-qjivil[30713]: Thu Oct  9 09:37:24 2025: (VI_0) received lower priority (90) advert from 192.168.122.102 - discarding
Oct  9 09:37:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f629c5aacf7beed2ffec0b948144cc97f555d6cabe0bb99c7e392c53917f78e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f629c5aacf7beed2ffec0b948144cc97f555d6cabe0bb99c7e392c53917f78e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f629c5aacf7beed2ffec0b948144cc97f555d6cabe0bb99c7e392c53917f78e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:24 compute-0 podman[30990]: 2025-10-09 09:37:24.081524883 +0000 UTC m=+0.089214655 container init 350706f7edbf6f0e4a2a0567bd38842cfed53ad68c7ff102720b3299ce3f8515 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_ramanujan, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  9 09:37:24 compute-0 podman[30990]: 2025-10-09 09:37:24.086110639 +0000 UTC m=+0.093800410 container start 350706f7edbf6f0e4a2a0567bd38842cfed53ad68c7ff102720b3299ce3f8515 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_ramanujan, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:37:24 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:24 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b40025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:24 compute-0 podman[30990]: 2025-10-09 09:37:24.08751166 +0000 UTC m=+0.095201431 container attach 350706f7edbf6f0e4a2a0567bd38842cfed53ad68c7ff102720b3299ce3f8515 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_ramanujan, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:37:24 compute-0 podman[30990]: 2025-10-09 09:37:24.009196976 +0000 UTC m=+0.016886767 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:24 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v23: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]: {
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:    "1": [
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:        {
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:            "devices": [
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:                "/dev/loop3"
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:            ],
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:            "lv_name": "ceph_lv0",
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:            "lv_size": "21470642176",
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:            "name": "ceph_lv0",
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:            "tags": {
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:                "ceph.cluster_name": "ceph",
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:                "ceph.crush_device_class": "",
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:                "ceph.encrypted": "0",
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:                "ceph.osd_id": "1",
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:                "ceph.type": "block",
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:                "ceph.vdo": "0",
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:                "ceph.with_tpm": "0"
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:            },
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:            "type": "block",
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:            "vg_name": "ceph_vg0"
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:        }
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]:    ]
Oct  9 09:37:24 compute-0 cool_ramanujan[31004]: }
Oct  9 09:37:24 compute-0 systemd[1]: libpod-350706f7edbf6f0e4a2a0567bd38842cfed53ad68c7ff102720b3299ce3f8515.scope: Deactivated successfully.
Oct  9 09:37:24 compute-0 podman[30990]: 2025-10-09 09:37:24.314309524 +0000 UTC m=+0.321999295 container died 350706f7edbf6f0e4a2a0567bd38842cfed53ad68c7ff102720b3299ce3f8515 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_ramanujan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct  9 09:37:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f629c5aacf7beed2ffec0b948144cc97f555d6cabe0bb99c7e392c53917f78e9-merged.mount: Deactivated successfully.
Oct  9 09:37:24 compute-0 podman[30990]: 2025-10-09 09:37:24.400356045 +0000 UTC m=+0.408045817 container remove 350706f7edbf6f0e4a2a0567bd38842cfed53ad68c7ff102720b3299ce3f8515 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:37:24 compute-0 systemd[1]: libpod-conmon-350706f7edbf6f0e4a2a0567bd38842cfed53ad68c7ff102720b3299ce3f8515.scope: Deactivated successfully.
Oct  9 09:37:24 compute-0 ceph-mgr[4772]: [progress INFO root] Writing back 15 completed events
Oct  9 09:37:24 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 09:37:24 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:24.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:24 compute-0 podman[31104]: 2025-10-09 09:37:24.807220612 +0000 UTC m=+0.028975668 container create bd2c1ed44dd0a8089ea1a88beb107cf5e724258c35fec446be64eabf49ee32d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_fermi, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:37:24 compute-0 systemd[1]: Started libpod-conmon-bd2c1ed44dd0a8089ea1a88beb107cf5e724258c35fec446be64eabf49ee32d7.scope.
Oct  9 09:37:24 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:24 compute-0 podman[31104]: 2025-10-09 09:37:24.859470406 +0000 UTC m=+0.081225463 container init bd2c1ed44dd0a8089ea1a88beb107cf5e724258c35fec446be64eabf49ee32d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_fermi, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:37:24 compute-0 podman[31104]: 2025-10-09 09:37:24.863704239 +0000 UTC m=+0.085459295 container start bd2c1ed44dd0a8089ea1a88beb107cf5e724258c35fec446be64eabf49ee32d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:37:24 compute-0 podman[31104]: 2025-10-09 09:37:24.864824291 +0000 UTC m=+0.086579347 container attach bd2c1ed44dd0a8089ea1a88beb107cf5e724258c35fec446be64eabf49ee32d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_fermi, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid)
Oct  9 09:37:24 compute-0 friendly_fermi[31117]: 167 167
Oct  9 09:37:24 compute-0 systemd[1]: libpod-bd2c1ed44dd0a8089ea1a88beb107cf5e724258c35fec446be64eabf49ee32d7.scope: Deactivated successfully.
Oct  9 09:37:24 compute-0 conmon[31117]: conmon bd2c1ed44dd0a8089ea1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd2c1ed44dd0a8089ea1a88beb107cf5e724258c35fec446be64eabf49ee32d7.scope/container/memory.events
Oct  9 09:37:24 compute-0 podman[31104]: 2025-10-09 09:37:24.867237901 +0000 UTC m=+0.088992947 container died bd2c1ed44dd0a8089ea1a88beb107cf5e724258c35fec446be64eabf49ee32d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_fermi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  9 09:37:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f112139c6376067bb3e268761a9094b5f112c5d2944d1ec144005bd808462a32-merged.mount: Deactivated successfully.
Oct  9 09:37:24 compute-0 podman[31104]: 2025-10-09 09:37:24.883867692 +0000 UTC m=+0.105622748 container remove bd2c1ed44dd0a8089ea1a88beb107cf5e724258c35fec446be64eabf49ee32d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_fermi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:37:24 compute-0 podman[31104]: 2025-10-09 09:37:24.796417174 +0000 UTC m=+0.018172250 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:24 compute-0 systemd[1]: libpod-conmon-bd2c1ed44dd0a8089ea1a88beb107cf5e724258c35fec446be64eabf49ee32d7.scope: Deactivated successfully.
Oct  9 09:37:24 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:24 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac0045c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:24 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-0-qjivil[30713]: Thu Oct  9 09:37:24 2025: (VI_0) received lower priority (90) advert from 192.168.122.102 - discarding
Oct  9 09:37:24 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha[25986]: Thu Oct  9 09:37:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:24 compute-0 podman[31140]: 2025-10-09 09:37:24.997870658 +0000 UTC m=+0.028642189 container create 242c3039dd84401ed03194395f3bb6e8a4eea90ffd6e804bca7b20aa96257ba1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_germain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Oct  9 09:37:25 compute-0 systemd[1]: Started libpod-conmon-242c3039dd84401ed03194395f3bb6e8a4eea90ffd6e804bca7b20aa96257ba1.scope.
Oct  9 09:37:25 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b748032ef26d0a848f24d15816082c61c7baf4d3a889aca30e3dd13357fe718/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b748032ef26d0a848f24d15816082c61c7baf4d3a889aca30e3dd13357fe718/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b748032ef26d0a848f24d15816082c61c7baf4d3a889aca30e3dd13357fe718/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b748032ef26d0a848f24d15816082c61c7baf4d3a889aca30e3dd13357fe718/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:25 compute-0 podman[31140]: 2025-10-09 09:37:25.061824548 +0000 UTC m=+0.092596089 container init 242c3039dd84401ed03194395f3bb6e8a4eea90ffd6e804bca7b20aa96257ba1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_germain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  9 09:37:25 compute-0 podman[31140]: 2025-10-09 09:37:25.066609489 +0000 UTC m=+0.097381011 container start 242c3039dd84401ed03194395f3bb6e8a4eea90ffd6e804bca7b20aa96257ba1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_germain, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  9 09:37:25 compute-0 podman[31140]: 2025-10-09 09:37:25.067794684 +0000 UTC m=+0.098566215 container attach 242c3039dd84401ed03194395f3bb6e8a4eea90ffd6e804bca7b20aa96257ba1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_germain, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  9 09:37:25 compute-0 podman[31140]: 2025-10-09 09:37:24.986955892 +0000 UTC m=+0.017727433 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:25 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac0045c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:25 compute-0 lvm[31228]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:37:25 compute-0 lvm[31228]: VG ceph_vg0 finished
Oct  9 09:37:25 compute-0 hardcore_germain[31153]: {}
Oct  9 09:37:25 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:25 compute-0 lvm[31231]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:37:25 compute-0 lvm[31231]: VG ceph_vg0 finished
Oct  9 09:37:25 compute-0 systemd[1]: libpod-242c3039dd84401ed03194395f3bb6e8a4eea90ffd6e804bca7b20aa96257ba1.scope: Deactivated successfully.
Oct  9 09:37:25 compute-0 podman[31140]: 2025-10-09 09:37:25.580359514 +0000 UTC m=+0.611131045 container died 242c3039dd84401ed03194395f3bb6e8a4eea90ffd6e804bca7b20aa96257ba1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_germain, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:37:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b748032ef26d0a848f24d15816082c61c7baf4d3a889aca30e3dd13357fe718-merged.mount: Deactivated successfully.
Oct  9 09:37:25 compute-0 podman[31140]: 2025-10-09 09:37:25.602253991 +0000 UTC m=+0.633025522 container remove 242c3039dd84401ed03194395f3bb6e8a4eea90ffd6e804bca7b20aa96257ba1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  9 09:37:25 compute-0 systemd[1]: libpod-conmon-242c3039dd84401ed03194395f3bb6e8a4eea90ffd6e804bca7b20aa96257ba1.scope: Deactivated successfully.
Oct  9 09:37:25 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:37:25 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:25 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:37:25 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:37:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:25.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:37:25 compute-0 systemd-logind[798]: New session 22 of user zuul.
Oct  9 09:37:25 compute-0 systemd[1]: Started Session 22 of User zuul.
Oct  9 09:37:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-0-qjivil[30713]: Thu Oct  9 09:37:25 2025: (VI_0) received lower priority (90) advert from 192.168.122.102 - discarding
Oct  9 09:37:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha[25986]: Thu Oct  9 09:37:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-0-qjivil[30713]: Thu Oct  9 09:37:26 2025: (VI_0) Entering MASTER STATE
Oct  9 09:37:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:26 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac0045c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:26 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v24: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct  9 09:37:26 compute-0 podman[31430]: 2025-10-09 09:37:26.280506717 +0000 UTC m=+0.036072958 container exec fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  9 09:37:26 compute-0 podman[31472]: 2025-10-09 09:37:26.409228182 +0000 UTC m=+0.044059555 container exec_died fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  9 09:37:26 compute-0 podman[31430]: 2025-10-09 09:37:26.412517031 +0000 UTC m=+0.168083263 container exec_died fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:37:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:26.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:26 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:26 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:26 compute-0 podman[31623]: 2025-10-09 09:37:26.70963599 +0000 UTC m=+0.033765369 container exec f6c5e5aaa66e540d2596b51d05e5f681f364ae1190d47d1f1326559548314a4b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:26 compute-0 podman[31623]: 2025-10-09 09:37:26.717290391 +0000 UTC m=+0.041419748 container exec_died f6c5e5aaa66e540d2596b51d05e5f681f364ae1190d47d1f1326559548314a4b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha[25986]: Thu Oct  9 09:37:26 2025: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Oct  9 09:37:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-0-qjivil[30713]: Thu Oct  9 09:37:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:26 compute-0 python3.9[31574]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:37:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:26 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac0045c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:26 compute-0 podman[31720]: 2025-10-09 09:37:26.96571187 +0000 UTC m=+0.035592802 container exec bd3cbdfb5f1cb9bb74e2043c48786e84aea19baa506d844adecf836d2e2fa6f1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:26 compute-0 podman[31720]: 2025-10-09 09:37:26.987408845 +0000 UTC m=+0.057289767 container exec_died bd3cbdfb5f1cb9bb74e2043c48786e84aea19baa506d844adecf836d2e2fa6f1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:27 compute-0 podman[31787]: 2025-10-09 09:37:27.126695567 +0000 UTC m=+0.040157138 container exec 80f41780a224394d2e72978ad05b417bbf3d1eeac5620f866d5082d3b8450db5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:27 compute-0 podman[31787]: 2025-10-09 09:37:27.244057967 +0000 UTC m=+0.157519537 container exec_died 80f41780a224394d2e72978ad05b417bbf3d1eeac5620f866d5082d3b8450db5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:27 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:37:27 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:27 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:37:27 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:27 compute-0 podman[31886]: 2025-10-09 09:37:27.431409354 +0000 UTC m=+0.035209810 container exec 0c3906f36b8c5387e26601a1089154bdda03c8f87fbea5119420184790883682 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-0-kmcywb)
Oct  9 09:37:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:27 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac0045c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:27 compute-0 podman[31886]: 2025-10-09 09:37:27.441322383 +0000 UTC m=+0.045122818 container exec_died 0c3906f36b8c5387e26601a1089154bdda03c8f87fbea5119420184790883682 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-0-kmcywb)
Oct  9 09:37:27 compute-0 podman[31960]: 2025-10-09 09:37:27.576841822 +0000 UTC m=+0.034456932 container exec 45254cf9a2cd91037496049d12c8fdc604c0d669b06c7d761c3228749e14c043 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha, release=1793, architecture=x86_64, com.redhat.component=keepalived-container, io.openshift.expose-services=, version=2.2.4, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, name=keepalived)
Oct  9 09:37:27 compute-0 podman[31960]: 2025-10-09 09:37:27.582127839 +0000 UTC m=+0.039742958 container exec_died 45254cf9a2cd91037496049d12c8fdc604c0d669b06c7d761c3228749e14c043 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, distribution-scope=public, vcs-type=git, io.buildah.version=1.28.2, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, name=keepalived, release=1793)
Oct  9 09:37:27 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:27 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:27 compute-0 podman[32063]: 2025-10-09 09:37:27.724723168 +0000 UTC m=+0.034122602 container exec ad7aeb5739d77e7c0db5bedadf9f04170fb86eb3e4620e2c374ce0ab10bde8f2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:27 compute-0 podman[32063]: 2025-10-09 09:37:27.753368983 +0000 UTC m=+0.062768416 container exec_died ad7aeb5739d77e7c0db5bedadf9f04170fb86eb3e4620e2c374ce0ab10bde8f2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:27 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:37:27 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:37:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:27.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:37:27 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:37:27 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:27 compute-0 podman[32111]: 2025-10-09 09:37:27.863126866 +0000 UTC m=+0.034605321 container exec ae795f28e8cc40d40a12c989e9bbeb32107bf485450cd1f6b578cfeea442e1a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:37:27 compute-0 podman[32111]: 2025-10-09 09:37:27.870399056 +0000 UTC m=+0.041877491 container exec_died ae795f28e8cc40d40a12c989e9bbeb32107bf485450cd1f6b578cfeea442e1a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  9 09:37:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:37:28 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:37:28 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:37:28 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:37:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:37:28 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:37:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:37:28 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:37:28 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:37:28 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:37:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:37:28 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:37:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:37:28 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:37:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:28 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b4003ba0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:28 compute-0 python3.9[32245]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:37:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:28 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:37:28 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v25: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct  9 09:37:28 compute-0 podman[32358]: 2025-10-09 09:37:28.42447187 +0000 UTC m=+0.027050218 container create 5075d4e26f9d049d0cb22c4b221c6d7e74454bd2f30bcc90529f8c35c19a6880 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:37:28 compute-0 systemd[1]: Started libpod-conmon-5075d4e26f9d049d0cb22c4b221c6d7e74454bd2f30bcc90529f8c35c19a6880.scope.
Oct  9 09:37:28 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:28 compute-0 podman[32358]: 2025-10-09 09:37:28.464824356 +0000 UTC m=+0.067402693 container init 5075d4e26f9d049d0cb22c4b221c6d7e74454bd2f30bcc90529f8c35c19a6880 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_knuth, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:37:28 compute-0 podman[32358]: 2025-10-09 09:37:28.469548041 +0000 UTC m=+0.072126379 container start 5075d4e26f9d049d0cb22c4b221c6d7e74454bd2f30bcc90529f8c35c19a6880 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  9 09:37:28 compute-0 podman[32358]: 2025-10-09 09:37:28.470514463 +0000 UTC m=+0.073092801 container attach 5075d4e26f9d049d0cb22c4b221c6d7e74454bd2f30bcc90529f8c35c19a6880 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  9 09:37:28 compute-0 youthful_knuth[32372]: 167 167
Oct  9 09:37:28 compute-0 systemd[1]: libpod-5075d4e26f9d049d0cb22c4b221c6d7e74454bd2f30bcc90529f8c35c19a6880.scope: Deactivated successfully.
Oct  9 09:37:28 compute-0 podman[32358]: 2025-10-09 09:37:28.47344103 +0000 UTC m=+0.076019389 container died 5075d4e26f9d049d0cb22c4b221c6d7e74454bd2f30bcc90529f8c35c19a6880 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_knuth, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:37:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-51526042bdb39fb07d2a90fbff3f4662b4071e25cf9eb4d979cc14f821cedd64-merged.mount: Deactivated successfully.
Oct  9 09:37:28 compute-0 podman[32358]: 2025-10-09 09:37:28.490072184 +0000 UTC m=+0.092650522 container remove 5075d4e26f9d049d0cb22c4b221c6d7e74454bd2f30bcc90529f8c35c19a6880 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:37:28 compute-0 podman[32358]: 2025-10-09 09:37:28.414015516 +0000 UTC m=+0.016593864 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:28 compute-0 systemd[1]: libpod-conmon-5075d4e26f9d049d0cb22c4b221c6d7e74454bd2f30bcc90529f8c35c19a6880.scope: Deactivated successfully.
Oct  9 09:37:28 compute-0 podman[32395]: 2025-10-09 09:37:28.603350183 +0000 UTC m=+0.029369931 container create 3e2fd8783f1ef900bec313f49faa4474c4a75c8b67ece1e7e9ff93a4187b1689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_meitner, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:37:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:28.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:28 compute-0 systemd[1]: Started libpod-conmon-3e2fd8783f1ef900bec313f49faa4474c4a75c8b67ece1e7e9ff93a4187b1689.scope.
Oct  9 09:37:28 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a751e1bb57be22b3f12448e0b843362e1ddb6d815c2140c6fdeb3c68e53568c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a751e1bb57be22b3f12448e0b843362e1ddb6d815c2140c6fdeb3c68e53568c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a751e1bb57be22b3f12448e0b843362e1ddb6d815c2140c6fdeb3c68e53568c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a751e1bb57be22b3f12448e0b843362e1ddb6d815c2140c6fdeb3c68e53568c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a751e1bb57be22b3f12448e0b843362e1ddb6d815c2140c6fdeb3c68e53568c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:28 compute-0 podman[32395]: 2025-10-09 09:37:28.663839712 +0000 UTC m=+0.089859470 container init 3e2fd8783f1ef900bec313f49faa4474c4a75c8b67ece1e7e9ff93a4187b1689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_meitner, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  9 09:37:28 compute-0 podman[32395]: 2025-10-09 09:37:28.668694315 +0000 UTC m=+0.094714063 container start 3e2fd8783f1ef900bec313f49faa4474c4a75c8b67ece1e7e9ff93a4187b1689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Oct  9 09:37:28 compute-0 podman[32395]: 2025-10-09 09:37:28.669828783 +0000 UTC m=+0.095848531 container attach 3e2fd8783f1ef900bec313f49faa4474c4a75c8b67ece1e7e9ff93a4187b1689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_meitner, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:37:28 compute-0 podman[32395]: 2025-10-09 09:37:28.591591345 +0000 UTC m=+0.017611103 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:28 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:37:28 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:37:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:28 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b4003ba0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:28 compute-0 crazy_meitner[32408]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:37:28 compute-0 crazy_meitner[32408]: --> All data devices are unavailable
Oct  9 09:37:28 compute-0 systemd[1]: libpod-3e2fd8783f1ef900bec313f49faa4474c4a75c8b67ece1e7e9ff93a4187b1689.scope: Deactivated successfully.
Oct  9 09:37:28 compute-0 conmon[32408]: conmon 3e2fd8783f1ef900bec3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3e2fd8783f1ef900bec313f49faa4474c4a75c8b67ece1e7e9ff93a4187b1689.scope/container/memory.events
Oct  9 09:37:28 compute-0 podman[32395]: 2025-10-09 09:37:28.944982591 +0000 UTC m=+0.371002339 container died 3e2fd8783f1ef900bec313f49faa4474c4a75c8b67ece1e7e9ff93a4187b1689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_meitner, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct  9 09:37:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a751e1bb57be22b3f12448e0b843362e1ddb6d815c2140c6fdeb3c68e53568c-merged.mount: Deactivated successfully.
Oct  9 09:37:28 compute-0 podman[32395]: 2025-10-09 09:37:28.967649985 +0000 UTC m=+0.393669733 container remove 3e2fd8783f1ef900bec313f49faa4474c4a75c8b67ece1e7e9ff93a4187b1689 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:37:28 compute-0 systemd[1]: libpod-conmon-3e2fd8783f1ef900bec313f49faa4474c4a75c8b67ece1e7e9ff93a4187b1689.scope: Deactivated successfully.
Oct  9 09:37:29 compute-0 podman[32518]: 2025-10-09 09:37:29.378389978 +0000 UTC m=+0.027439360 container create 560da39fc203ba429ba7bf65d0b4985bcdf8f4ee15dd20d0f2f4c2b9b2f203ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:37:29 compute-0 systemd[1]: Started libpod-conmon-560da39fc203ba429ba7bf65d0b4985bcdf8f4ee15dd20d0f2f4c2b9b2f203ea.scope.
Oct  9 09:37:29 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:29 compute-0 podman[32518]: 2025-10-09 09:37:29.422745251 +0000 UTC m=+0.071794622 container init 560da39fc203ba429ba7bf65d0b4985bcdf8f4ee15dd20d0f2f4c2b9b2f203ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_darwin, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  9 09:37:29 compute-0 podman[32518]: 2025-10-09 09:37:29.427710881 +0000 UTC m=+0.076760253 container start 560da39fc203ba429ba7bf65d0b4985bcdf8f4ee15dd20d0f2f4c2b9b2f203ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_darwin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  9 09:37:29 compute-0 objective_darwin[32531]: 167 167
Oct  9 09:37:29 compute-0 podman[32518]: 2025-10-09 09:37:29.429674073 +0000 UTC m=+0.078723444 container attach 560da39fc203ba429ba7bf65d0b4985bcdf8f4ee15dd20d0f2f4c2b9b2f203ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_darwin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:37:29 compute-0 systemd[1]: libpod-560da39fc203ba429ba7bf65d0b4985bcdf8f4ee15dd20d0f2f4c2b9b2f203ea.scope: Deactivated successfully.
Oct  9 09:37:29 compute-0 podman[32518]: 2025-10-09 09:37:29.431746389 +0000 UTC m=+0.080795781 container died 560da39fc203ba429ba7bf65d0b4985bcdf8f4ee15dd20d0f2f4c2b9b2f203ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_darwin, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:37:29 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:29 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b4003ba0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-617f42d6f99a51c6c1b852e63656c569c2b9a89f41ef242c7112a0b9dc57364c-merged.mount: Deactivated successfully.
Oct  9 09:37:29 compute-0 podman[32518]: 2025-10-09 09:37:29.453697383 +0000 UTC m=+0.102746755 container remove 560da39fc203ba429ba7bf65d0b4985bcdf8f4ee15dd20d0f2f4c2b9b2f203ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_darwin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Oct  9 09:37:29 compute-0 podman[32518]: 2025-10-09 09:37:29.366680182 +0000 UTC m=+0.015729575 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:29 compute-0 systemd[1]: libpod-conmon-560da39fc203ba429ba7bf65d0b4985bcdf8f4ee15dd20d0f2f4c2b9b2f203ea.scope: Deactivated successfully.
Oct  9 09:37:29 compute-0 podman[32552]: 2025-10-09 09:37:29.567832558 +0000 UTC m=+0.029062973 container create ec5faca90a21aedbc43f6041b08c646093a107fa67fe523848ce7edbb380c4d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  9 09:37:29 compute-0 systemd[1]: Started libpod-conmon-ec5faca90a21aedbc43f6041b08c646093a107fa67fe523848ce7edbb380c4d8.scope.
Oct  9 09:37:29 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/236aca4e5eb71ee84ada92e6a391f78bde76df0d596dff5c2950e1a43d51a8ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/236aca4e5eb71ee84ada92e6a391f78bde76df0d596dff5c2950e1a43d51a8ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/236aca4e5eb71ee84ada92e6a391f78bde76df0d596dff5c2950e1a43d51a8ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/236aca4e5eb71ee84ada92e6a391f78bde76df0d596dff5c2950e1a43d51a8ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:29 compute-0 podman[32552]: 2025-10-09 09:37:29.6212051 +0000 UTC m=+0.082435535 container init ec5faca90a21aedbc43f6041b08c646093a107fa67fe523848ce7edbb380c4d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_booth, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:37:29 compute-0 podman[32552]: 2025-10-09 09:37:29.625937462 +0000 UTC m=+0.087167877 container start ec5faca90a21aedbc43f6041b08c646093a107fa67fe523848ce7edbb380c4d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct  9 09:37:29 compute-0 podman[32552]: 2025-10-09 09:37:29.627513192 +0000 UTC m=+0.088743607 container attach ec5faca90a21aedbc43f6041b08c646093a107fa67fe523848ce7edbb380c4d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_booth, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  9 09:37:29 compute-0 podman[32552]: 2025-10-09 09:37:29.556002416 +0000 UTC m=+0.017232841 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:29.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:29 compute-0 zealous_booth[32565]: {
Oct  9 09:37:29 compute-0 zealous_booth[32565]:    "1": [
Oct  9 09:37:29 compute-0 zealous_booth[32565]:        {
Oct  9 09:37:29 compute-0 zealous_booth[32565]:            "devices": [
Oct  9 09:37:29 compute-0 zealous_booth[32565]:                "/dev/loop3"
Oct  9 09:37:29 compute-0 zealous_booth[32565]:            ],
Oct  9 09:37:29 compute-0 zealous_booth[32565]:            "lv_name": "ceph_lv0",
Oct  9 09:37:29 compute-0 zealous_booth[32565]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:37:29 compute-0 zealous_booth[32565]:            "lv_size": "21470642176",
Oct  9 09:37:29 compute-0 zealous_booth[32565]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:37:29 compute-0 zealous_booth[32565]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:37:29 compute-0 zealous_booth[32565]:            "name": "ceph_lv0",
Oct  9 09:37:29 compute-0 zealous_booth[32565]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:37:29 compute-0 zealous_booth[32565]:            "tags": {
Oct  9 09:37:29 compute-0 zealous_booth[32565]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:37:29 compute-0 zealous_booth[32565]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:37:29 compute-0 zealous_booth[32565]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:37:29 compute-0 zealous_booth[32565]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:37:29 compute-0 zealous_booth[32565]:                "ceph.cluster_name": "ceph",
Oct  9 09:37:29 compute-0 zealous_booth[32565]:                "ceph.crush_device_class": "",
Oct  9 09:37:29 compute-0 zealous_booth[32565]:                "ceph.encrypted": "0",
Oct  9 09:37:29 compute-0 zealous_booth[32565]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:37:29 compute-0 zealous_booth[32565]:                "ceph.osd_id": "1",
Oct  9 09:37:29 compute-0 zealous_booth[32565]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:37:29 compute-0 zealous_booth[32565]:                "ceph.type": "block",
Oct  9 09:37:29 compute-0 zealous_booth[32565]:                "ceph.vdo": "0",
Oct  9 09:37:29 compute-0 zealous_booth[32565]:                "ceph.with_tpm": "0"
Oct  9 09:37:29 compute-0 zealous_booth[32565]:            },
Oct  9 09:37:29 compute-0 zealous_booth[32565]:            "type": "block",
Oct  9 09:37:29 compute-0 zealous_booth[32565]:            "vg_name": "ceph_vg0"
Oct  9 09:37:29 compute-0 zealous_booth[32565]:        }
Oct  9 09:37:29 compute-0 zealous_booth[32565]:    ]
Oct  9 09:37:29 compute-0 zealous_booth[32565]: }
Oct  9 09:37:29 compute-0 systemd[1]: libpod-ec5faca90a21aedbc43f6041b08c646093a107fa67fe523848ce7edbb380c4d8.scope: Deactivated successfully.
Oct  9 09:37:29 compute-0 podman[32552]: 2025-10-09 09:37:29.873070473 +0000 UTC m=+0.334300877 container died ec5faca90a21aedbc43f6041b08c646093a107fa67fe523848ce7edbb380c4d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  9 09:37:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-236aca4e5eb71ee84ada92e6a391f78bde76df0d596dff5c2950e1a43d51a8ab-merged.mount: Deactivated successfully.
Oct  9 09:37:29 compute-0 podman[32552]: 2025-10-09 09:37:29.896070394 +0000 UTC m=+0.357300808 container remove ec5faca90a21aedbc43f6041b08c646093a107fa67fe523848ce7edbb380c4d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True)
Oct  9 09:37:29 compute-0 systemd[1]: libpod-conmon-ec5faca90a21aedbc43f6041b08c646093a107fa67fe523848ce7edbb380c4d8.scope: Deactivated successfully.
Oct  9 09:37:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:30 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b4003ba0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:30 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v26: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct  9 09:37:30 compute-0 podman[32667]: 2025-10-09 09:37:30.320484136 +0000 UTC m=+0.029269621 container create 104428e74d444a0998c667310e245e0ed9465d8ed850f3366279f70f1253809f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Oct  9 09:37:30 compute-0 systemd[1]: Started libpod-conmon-104428e74d444a0998c667310e245e0ed9465d8ed850f3366279f70f1253809f.scope.
Oct  9 09:37:30 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:30 compute-0 podman[32667]: 2025-10-09 09:37:30.369830366 +0000 UTC m=+0.078615852 container init 104428e74d444a0998c667310e245e0ed9465d8ed850f3366279f70f1253809f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wright, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct  9 09:37:30 compute-0 podman[32667]: 2025-10-09 09:37:30.37554456 +0000 UTC m=+0.084330045 container start 104428e74d444a0998c667310e245e0ed9465d8ed850f3366279f70f1253809f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wright, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 09:37:30 compute-0 podman[32667]: 2025-10-09 09:37:30.376776011 +0000 UTC m=+0.085561516 container attach 104428e74d444a0998c667310e245e0ed9465d8ed850f3366279f70f1253809f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wright, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  9 09:37:30 compute-0 gracious_wright[32680]: 167 167
Oct  9 09:37:30 compute-0 systemd[1]: libpod-104428e74d444a0998c667310e245e0ed9465d8ed850f3366279f70f1253809f.scope: Deactivated successfully.
Oct  9 09:37:30 compute-0 podman[32667]: 2025-10-09 09:37:30.378404902 +0000 UTC m=+0.087190388 container died 104428e74d444a0998c667310e245e0ed9465d8ed850f3366279f70f1253809f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 09:37:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6660bf0372d1f17638dad23e50d2d68ec317f5d44446197955e0acd12fe90b5-merged.mount: Deactivated successfully.
Oct  9 09:37:30 compute-0 podman[32667]: 2025-10-09 09:37:30.396460722 +0000 UTC m=+0.105246206 container remove 104428e74d444a0998c667310e245e0ed9465d8ed850f3366279f70f1253809f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_wright, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:37:30 compute-0 podman[32667]: 2025-10-09 09:37:30.307783823 +0000 UTC m=+0.016569308 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:30 compute-0 systemd[1]: libpod-conmon-104428e74d444a0998c667310e245e0ed9465d8ed850f3366279f70f1253809f.scope: Deactivated successfully.
Oct  9 09:37:30 compute-0 podman[32704]: 2025-10-09 09:37:30.517209366 +0000 UTC m=+0.032614658 container create c8c939a8107fed7f4c46b610f4b84d934a9eae6722162f514f2a1f1a8c6fafff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  9 09:37:30 compute-0 systemd[1]: Started libpod-conmon-c8c939a8107fed7f4c46b610f4b84d934a9eae6722162f514f2a1f1a8c6fafff.scope.
Oct  9 09:37:30 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2dd29d23fa3943a2ba87faa76890ed6d1f797cbaba6ce6de684c090c6a00c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2dd29d23fa3943a2ba87faa76890ed6d1f797cbaba6ce6de684c090c6a00c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2dd29d23fa3943a2ba87faa76890ed6d1f797cbaba6ce6de684c090c6a00c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2dd29d23fa3943a2ba87faa76890ed6d1f797cbaba6ce6de684c090c6a00c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:30 compute-0 podman[32704]: 2025-10-09 09:37:30.573681941 +0000 UTC m=+0.089087234 container init c8c939a8107fed7f4c46b610f4b84d934a9eae6722162f514f2a1f1a8c6fafff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hamilton, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  9 09:37:30 compute-0 podman[32704]: 2025-10-09 09:37:30.57962167 +0000 UTC m=+0.095026962 container start c8c939a8107fed7f4c46b610f4b84d934a9eae6722162f514f2a1f1a8c6fafff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:37:30 compute-0 podman[32704]: 2025-10-09 09:37:30.587169678 +0000 UTC m=+0.102574982 container attach c8c939a8107fed7f4c46b610f4b84d934a9eae6722162f514f2a1f1a8c6fafff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hamilton, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:37:30 compute-0 podman[32704]: 2025-10-09 09:37:30.502943931 +0000 UTC m=+0.018349244 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:30.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:30 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b80089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:31 compute-0 lvm[32792]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:37:31 compute-0 lvm[32792]: VG ceph_vg0 finished
Oct  9 09:37:31 compute-0 lucid_hamilton[32716]: {}
Oct  9 09:37:31 compute-0 podman[32704]: 2025-10-09 09:37:31.078945277 +0000 UTC m=+0.594350570 container died c8c939a8107fed7f4c46b610f4b84d934a9eae6722162f514f2a1f1a8c6fafff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:37:31 compute-0 systemd[1]: libpod-c8c939a8107fed7f4c46b610f4b84d934a9eae6722162f514f2a1f1a8c6fafff.scope: Deactivated successfully.
Oct  9 09:37:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b2dd29d23fa3943a2ba87faa76890ed6d1f797cbaba6ce6de684c090c6a00c4-merged.mount: Deactivated successfully.
Oct  9 09:37:31 compute-0 podman[32704]: 2025-10-09 09:37:31.101476052 +0000 UTC m=+0.616881346 container remove c8c939a8107fed7f4c46b610f4b84d934a9eae6722162f514f2a1f1a8c6fafff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 09:37:31 compute-0 systemd[1]: libpod-conmon-c8c939a8107fed7f4c46b610f4b84d934a9eae6722162f514f2a1f1a8c6fafff.scope: Deactivated successfully.
Oct  9 09:37:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:37:31 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:37:31 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:31 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:37:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:31 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:37:31 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Oct  9 09:37:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct  9 09:37:31 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 09:37:31 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Oct  9 09:37:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct  9 09:37:31 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  9 09:37:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:37:31 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:37:31 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct  9 09:37:31 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct  9 09:37:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:31 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac006430 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:31 compute-0 podman[32899]: 2025-10-09 09:37:31.621831111 +0000 UTC m=+0.035539694 container create 50cee4339ca838d1b29bac035283de42a8edb5f36e7a5ba2dd118f27df372f53 (image=quay.io/ceph/ceph:v19, name=elastic_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:37:31 compute-0 systemd[1]: Started libpod-conmon-50cee4339ca838d1b29bac035283de42a8edb5f36e7a5ba2dd118f27df372f53.scope.
Oct  9 09:37:31 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:31 compute-0 podman[32899]: 2025-10-09 09:37:31.669533833 +0000 UTC m=+0.083242426 container init 50cee4339ca838d1b29bac035283de42a8edb5f36e7a5ba2dd118f27df372f53 (image=quay.io/ceph/ceph:v19, name=elastic_buck, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  9 09:37:31 compute-0 podman[32899]: 2025-10-09 09:37:31.675313539 +0000 UTC m=+0.089022111 container start 50cee4339ca838d1b29bac035283de42a8edb5f36e7a5ba2dd118f27df372f53 (image=quay.io/ceph/ceph:v19, name=elastic_buck, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 09:37:31 compute-0 podman[32899]: 2025-10-09 09:37:31.677228349 +0000 UTC m=+0.090937012 container attach 50cee4339ca838d1b29bac035283de42a8edb5f36e7a5ba2dd118f27df372f53 (image=quay.io/ceph/ceph:v19, name=elastic_buck, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  9 09:37:31 compute-0 elastic_buck[32913]: 167 167
Oct  9 09:37:31 compute-0 systemd[1]: libpod-50cee4339ca838d1b29bac035283de42a8edb5f36e7a5ba2dd118f27df372f53.scope: Deactivated successfully.
Oct  9 09:37:31 compute-0 podman[32899]: 2025-10-09 09:37:31.678695665 +0000 UTC m=+0.092404238 container died 50cee4339ca838d1b29bac035283de42a8edb5f36e7a5ba2dd118f27df372f53 (image=quay.io/ceph/ceph:v19, name=elastic_buck, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:37:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-3885ed957a484eef5c880a391aef47ef48aff3dff0ed276f567f487c0e92648e-merged.mount: Deactivated successfully.
Oct  9 09:37:31 compute-0 podman[32899]: 2025-10-09 09:37:31.698582476 +0000 UTC m=+0.112291049 container remove 50cee4339ca838d1b29bac035283de42a8edb5f36e7a5ba2dd118f27df372f53 (image=quay.io/ceph/ceph:v19, name=elastic_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  9 09:37:31 compute-0 podman[32899]: 2025-10-09 09:37:31.610061562 +0000 UTC m=+0.023770155 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:37:31 compute-0 systemd[1]: libpod-conmon-50cee4339ca838d1b29bac035283de42a8edb5f36e7a5ba2dd118f27df372f53.scope: Deactivated successfully.
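[annotation] The block above is one complete short-lived podman probe: start, attach, the container's single line of output, died, the overlay unmount, remove, and the conmon scope teardown (the image-pull event carries an earlier timestamp and is logged out of order, which is normal podman journald behavior). A minimal sketch, assuming the journal has been saved to a plain file, of grouping these lifecycle events per container with Python's standard library; the file path and the helper name are illustrative only:

    import re

    # Matches podman's journald events, e.g. "... podman[32899]: 2025-10-09
    # 09:37:31... container start 50cee4339ca8... (image=..., name=...)".
    EVENT_RE = re.compile(
        r"podman\[\d+\].*?container "
        r"(?P<event>create|init|start|attach|died|remove) "
        r"(?P<cid>[0-9a-f]{64})"
    )

    def lifecycles(path="/var/log/messages"):  # path is an assumption
        seen = {}
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = EVENT_RE.search(line)
                if m:
                    seen.setdefault(m["cid"][:12], []).append(m["event"])
        return seen

    for cid, events in lifecycles().items():
        # e.g. 50cee4339ca8 -> start attach died remove
        print(cid, "->", " ".join(events))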
Oct  9 09:37:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:37:31 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:37:31 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:31 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.lwqgfy (monmap changed)...
Oct  9 09:37:31 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.lwqgfy (monmap changed)...
Oct  9 09:37:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.lwqgfy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct  9 09:37:31 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.lwqgfy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 09:37:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  9 09:37:31 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  9 09:37:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:37:31 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:37:31 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.lwqgfy on compute-0
Oct  9 09:37:31 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.lwqgfy on compute-0
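[annotation] Each cephadm "Reconfiguring daemon ..." pass above is driven by a pair of mon queries: auth get-or-create for the daemon's keyring and config generate-minimal-conf for the config file it writes back out. Both are ordinary CLI subcommands, so the exchange can be reproduced by hand; a sketch, assuming an admin keyring is available on this host (the ceph() wrapper is an illustrative helper, the two subcommands are the ones dispatched above):

    import subprocess

    def ceph(*args):
        """Thin wrapper over the ceph CLI (illustrative helper)."""
        return subprocess.run(("ceph",) + args, check=True,
                              capture_output=True, text=True).stdout

    # The same two queries mgr.compute-0.lwqgfy dispatched to the mon:
    keyring = ceph("auth", "get-or-create", "mgr.compute-0.lwqgfy",
                   "mon", "profile mgr", "osd", "allow *", "mds", "allow *")
    minimal_conf = ceph("config", "generate-minimal-conf")
    print(keyring, minimal_conf, sep="\n")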
Oct  9 09:37:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:31.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
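[annotation] The anonymous "HEAD / HTTP/1.0" requests that radosgw's beast frontend logs every second or so are load-balancer health probes (192.168.122.100 and .102 alternate as sources). A sketch of the same probe from Python; the endpoint address and port are assumptions, and http.client speaks HTTP/1.1 rather than the balancer's HTTP/1.0, which radosgw answers the same way:

    import http.client

    # Anonymous HEAD probe against the RGW beast frontend; a 200 with an
    # empty body is the healthy answer seen in the access line above.
    conn = http.client.HTTPConnection("192.168.122.102", 8080, timeout=5)  # assumed endpoint
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # -> 200
    conn.close()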
Oct  9 09:37:32 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:32 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:32 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 09:37:32 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:32 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:32 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.lwqgfy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 09:37:32 compute-0 podman[32997]: 2025-10-09 09:37:32.071698589 +0000 UTC m=+0.027572010 container create 0aecf4294122eaebceb2421410760de2be4b217d243e4ad203212e7a5a2cfa66 (image=quay.io/ceph/ceph:v19, name=quirky_khorana, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:37:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:32 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac006430 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:32 compute-0 systemd[1]: Started libpod-conmon-0aecf4294122eaebceb2421410760de2be4b217d243e4ad203212e7a5a2cfa66.scope.
Oct  9 09:37:32 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:32 compute-0 podman[32997]: 2025-10-09 09:37:32.12442891 +0000 UTC m=+0.080302331 container init 0aecf4294122eaebceb2421410760de2be4b217d243e4ad203212e7a5a2cfa66 (image=quay.io/ceph/ceph:v19, name=quirky_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:37:32 compute-0 podman[32997]: 2025-10-09 09:37:32.128883378 +0000 UTC m=+0.084756799 container start 0aecf4294122eaebceb2421410760de2be4b217d243e4ad203212e7a5a2cfa66 (image=quay.io/ceph/ceph:v19, name=quirky_khorana, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:37:32 compute-0 podman[32997]: 2025-10-09 09:37:32.13007786 +0000 UTC m=+0.085951281 container attach 0aecf4294122eaebceb2421410760de2be4b217d243e4ad203212e7a5a2cfa66 (image=quay.io/ceph/ceph:v19, name=quirky_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  9 09:37:32 compute-0 quirky_khorana[33011]: 167 167
Oct  9 09:37:32 compute-0 systemd[1]: libpod-0aecf4294122eaebceb2421410760de2be4b217d243e4ad203212e7a5a2cfa66.scope: Deactivated successfully.
Oct  9 09:37:32 compute-0 conmon[33011]: conmon 0aecf4294122eaebceb2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0aecf4294122eaebceb2421410760de2be4b217d243e4ad203212e7a5a2cfa66.scope/container/memory.events
Oct  9 09:37:32 compute-0 podman[32997]: 2025-10-09 09:37:32.132097919 +0000 UTC m=+0.087971340 container died 0aecf4294122eaebceb2421410760de2be4b217d243e4ad203212e7a5a2cfa66 (image=quay.io/ceph/ceph:v19, name=quirky_khorana, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  9 09:37:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa9355525d5e3917b433fef3e523340af6faf5b2a1a69658f5b9c1596fe69d38-merged.mount: Deactivated successfully.
Oct  9 09:37:32 compute-0 podman[32997]: 2025-10-09 09:37:32.153526447 +0000 UTC m=+0.109399868 container remove 0aecf4294122eaebceb2421410760de2be4b217d243e4ad203212e7a5a2cfa66 (image=quay.io/ceph/ceph:v19, name=quirky_khorana, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:37:32 compute-0 podman[32997]: 2025-10-09 09:37:32.060541837 +0000 UTC m=+0.016415277 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Oct  9 09:37:32 compute-0 systemd[1]: libpod-conmon-0aecf4294122eaebceb2421410760de2be4b217d243e4ad203212e7a5a2cfa66.scope: Deactivated successfully.
Oct  9 09:37:32 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:37:32 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:32 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:37:32 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:32 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Oct  9 09:37:32 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Oct  9 09:37:32 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct  9 09:37:32 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 09:37:32 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:37:32 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:37:32 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Oct  9 09:37:32 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Oct  9 09:37:32 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v27: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct  9 09:37:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:37:32] "GET /metrics HTTP/1.1" 200 48326 "" "Prometheus/2.51.0"
Oct  9 09:37:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:37:32] "GET /metrics HTTP/1.1" 200 48326 "" "Prometheus/2.51.0"
Oct  9 09:37:32 compute-0 podman[33091]: 2025-10-09 09:37:32.513316206 +0000 UTC m=+0.028909122 container create dbbc6e31204b01be9ad29fb849e59e70fdf838d7e7844f616d9e4a44493eab56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 09:37:32 compute-0 systemd[1]: Started libpod-conmon-dbbc6e31204b01be9ad29fb849e59e70fdf838d7e7844f616d9e4a44493eab56.scope.
Oct  9 09:37:32 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:32 compute-0 podman[33091]: 2025-10-09 09:37:32.569048386 +0000 UTC m=+0.084641322 container init dbbc6e31204b01be9ad29fb849e59e70fdf838d7e7844f616d9e4a44493eab56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  9 09:37:32 compute-0 podman[33091]: 2025-10-09 09:37:32.572909395 +0000 UTC m=+0.088502311 container start dbbc6e31204b01be9ad29fb849e59e70fdf838d7e7844f616d9e4a44493eab56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_ptolemy, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:37:32 compute-0 podman[33091]: 2025-10-09 09:37:32.573952071 +0000 UTC m=+0.089544987 container attach dbbc6e31204b01be9ad29fb849e59e70fdf838d7e7844f616d9e4a44493eab56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_ptolemy, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  9 09:37:32 compute-0 fervent_ptolemy[33104]: 167 167
Oct  9 09:37:32 compute-0 podman[33091]: 2025-10-09 09:37:32.576009049 +0000 UTC m=+0.091601965 container died dbbc6e31204b01be9ad29fb849e59e70fdf838d7e7844f616d9e4a44493eab56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_ptolemy, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  9 09:37:32 compute-0 systemd[1]: libpod-dbbc6e31204b01be9ad29fb849e59e70fdf838d7e7844f616d9e4a44493eab56.scope: Deactivated successfully.
Oct  9 09:37:32 compute-0 podman[33091]: 2025-10-09 09:37:32.591697045 +0000 UTC m=+0.107289960 container remove dbbc6e31204b01be9ad29fb849e59e70fdf838d7e7844f616d9e4a44493eab56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_ptolemy, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  9 09:37:32 compute-0 podman[33091]: 2025-10-09 09:37:32.500198646 +0000 UTC m=+0.015791572 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:32.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:32 compute-0 systemd[1]: libpod-conmon-dbbc6e31204b01be9ad29fb849e59e70fdf838d7e7844f616d9e4a44493eab56.scope: Deactivated successfully.
Oct  9 09:37:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-41340d4a69f22717127aef05c8517162607e99699869ebf29b6f8dc04618017c-merged.mount: Deactivated successfully.
Oct  9 09:37:32 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:37:32 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:32 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:37:32 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:32 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Oct  9 09:37:32 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Oct  9 09:37:32 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Oct  9 09:37:32 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  9 09:37:32 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:37:32 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:37:32 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-0
Oct  9 09:37:32 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-0
Oct  9 09:37:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:32 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b4005560 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:32 compute-0 podman[33183]: 2025-10-09 09:37:32.95559561 +0000 UTC m=+0.026898411 container create b89d6f77ddad10957e9f452ca94a19b853ea8ee892e4f125cd074fe57bba1b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct  9 09:37:32 compute-0 systemd[1]: Started libpod-conmon-b89d6f77ddad10957e9f452ca94a19b853ea8ee892e4f125cd074fe57bba1b25.scope.
Oct  9 09:37:32 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:32 compute-0 podman[33183]: 2025-10-09 09:37:32.996340756 +0000 UTC m=+0.067643577 container init b89d6f77ddad10957e9f452ca94a19b853ea8ee892e4f125cd074fe57bba1b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Oct  9 09:37:33 compute-0 podman[33183]: 2025-10-09 09:37:33.00120191 +0000 UTC m=+0.072504710 container start b89d6f77ddad10957e9f452ca94a19b853ea8ee892e4f125cd074fe57bba1b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_villani, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:37:33 compute-0 podman[33183]: 2025-10-09 09:37:33.002457457 +0000 UTC m=+0.073760258 container attach b89d6f77ddad10957e9f452ca94a19b853ea8ee892e4f125cd074fe57bba1b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  9 09:37:33 compute-0 thirsty_villani[33196]: 167 167
Oct  9 09:37:33 compute-0 systemd[1]: libpod-b89d6f77ddad10957e9f452ca94a19b853ea8ee892e4f125cd074fe57bba1b25.scope: Deactivated successfully.
Oct  9 09:37:33 compute-0 podman[33183]: 2025-10-09 09:37:33.004350526 +0000 UTC m=+0.075653326 container died b89d6f77ddad10957e9f452ca94a19b853ea8ee892e4f125cd074fe57bba1b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  9 09:37:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7e477d5647d4ba93ea85829eb9940c80e57a113e0da71d3f21de74b139eefdc-merged.mount: Deactivated successfully.
Oct  9 09:37:33 compute-0 podman[33183]: 2025-10-09 09:37:33.025101556 +0000 UTC m=+0.096404358 container remove b89d6f77ddad10957e9f452ca94a19b853ea8ee892e4f125cd074fe57bba1b25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:37:33 compute-0 podman[33183]: 2025-10-09 09:37:32.944486116 +0000 UTC m=+0.015788937 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:33 compute-0 systemd[1]: libpod-conmon-b89d6f77ddad10957e9f452ca94a19b853ea8ee892e4f125cd074fe57bba1b25.scope: Deactivated successfully.
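[annotation] Every one of these throwaway ceph:v19 containers (elastic_buck, quirky_khorana, fervent_ptolemy, thirsty_villani) prints the single line "167 167" and exits within milliseconds: 167 is the uid and gid of the ceph account baked into the image, and the output shape matches an ownership probe such as stat -c '%u %g'. The reconstruction below is an inference from the log, not something it states; the probed path in particular is a guess:

    import subprocess

    # Hypothetical re-run of the ownership probe: start the ceph image,
    # print uid/gid of a ceph-owned path, let --rm clean the container up.
    out = subprocess.run(
        ["podman", "run", "--rm", "quay.io/ceph/ceph:v19",
         "stat", "-c", "%u %g", "/var/lib/ceph"],  # probed path is a guess
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(out)  # -> "167 167", the pair each probe container logged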
Oct  9 09:37:33 compute-0 ceph-mon[4497]: Reconfiguring mon.compute-0 (monmap changed)...
Oct  9 09:37:33 compute-0 ceph-mon[4497]: Reconfiguring daemon mon.compute-0 on compute-0
Oct  9 09:37:33 compute-0 ceph-mon[4497]: Reconfiguring mgr.compute-0.lwqgfy (monmap changed)...
Oct  9 09:37:33 compute-0 ceph-mon[4497]: Reconfiguring daemon mgr.compute-0.lwqgfy on compute-0
Oct  9 09:37:33 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:33 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:33 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 09:37:33 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:33 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:33 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  9 09:37:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:37:33 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:37:33 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:33 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-0 (unknown last config time)...
Oct  9 09:37:33 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-0 (unknown last config time)...
Oct  9 09:37:33 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-0 on compute-0
Oct  9 09:37:33 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-0 on compute-0
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:33 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b80089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:33 compute-0 systemd[1]: Stopping Ceph node-exporter.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:37:33 compute-0 podman[33312]: 2025-10-09 09:37:33.569771223 +0000 UTC m=+0.036943039 container died f6c5e5aaa66e540d2596b51d05e5f681f364ae1190d47d1f1326559548314a4b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d9ed33f48d992ec68091a556f7859416fcc77245186887b2f1750ed0d73c246-merged.mount: Deactivated successfully.
Oct  9 09:37:33 compute-0 podman[33312]: 2025-10-09 09:37:33.592441142 +0000 UTC m=+0.059612957 container remove f6c5e5aaa66e540d2596b51d05e5f681f364ae1190d47d1f1326559548314a4b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:33 compute-0 bash[33312]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0
Oct  9 09:37:33 compute-0 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@node-exporter.compute-0.service: Main process exited, code=exited, status=143/n/a
Oct  9 09:37:33 compute-0 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@node-exporter.compute-0.service: Failed with result 'exit-code'.
Oct  9 09:37:33 compute-0 systemd[1]: Stopped Ceph node-exporter.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:37:33 compute-0 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@node-exporter.compute-0.service: Consumed 1.450s CPU time.
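[annotation] status=143 here is not an application error: systemd stopped the node-exporter container with SIGTERM, and the usual convention reports a process killed by signal N as exit status 128+N, so the "Failed with result 'exit-code'" line is just the TERM-based stop being recorded before the planned restart below. The arithmetic:

    import signal

    # 128 + SIGTERM(15) = 143, the status systemd logged for the stop above.
    print(128 + signal.SIGTERM)  # -> 143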
Oct  9 09:37:33 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:37:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:33.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:33 compute-0 podman[33397]: 2025-10-09 09:37:33.827635246 +0000 UTC m=+0.028859208 container create 10161c66b361b66edfdbf4951997fb2366322c945e67f044787f85dddc54c994 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf085d12406d42c0a8dad40efde87ed9ac0d2c0e73435a3736ede369d4a17d0/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
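[annotation] The recurring xfs warning decodes as follows: without the bigtime feature, xfs stores inode timestamps as signed 32-bit seconds, so the printed limit 0x7fffffff is the classic Y2038 boundary. Checking the date it corresponds to:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch is the limit the kernel prints.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00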
Oct  9 09:37:33 compute-0 podman[33397]: 2025-10-09 09:37:33.862118235 +0000 UTC m=+0.063342217 container init 10161c66b361b66edfdbf4951997fb2366322c945e67f044787f85dddc54c994 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:33 compute-0 podman[33397]: 2025-10-09 09:37:33.86624238 +0000 UTC m=+0.067466342 container start 10161c66b361b66edfdbf4951997fb2366322c945e67f044787f85dddc54c994 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:33 compute-0 bash[33397]: 10161c66b361b66edfdbf4951997fb2366322c945e67f044787f85dddc54c994
Oct  9 09:37:33 compute-0 podman[33397]: 2025-10-09 09:37:33.81463652 +0000 UTC m=+0.015860502 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.870Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.870Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.871Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct  9 09:37:33 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=arp
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=bcache
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=bonding
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=btrfs
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=conntrack
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=cpu
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=diskstats
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=dmi
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=edac
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=entropy
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=filefd
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=filesystem
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=hwmon
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=infiniband
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=ipvs
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=loadavg
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=mdadm
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=meminfo
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=netclass
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=netdev
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=netstat
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=nfs
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=nfsd
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=nvme
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=os
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=pressure
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=rapl
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=schedstat
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=selinux
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=sockstat
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=softnet
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=stat
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=tapestats
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=textfile
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=thermal_zone
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=time
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=uname
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=vmstat
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=xfs
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=node_exporter.go:117 level=info collector=zfs
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Oct  9 09:37:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0[33409]: ts=2025-10-09T09:37:33.872Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
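[annotation] With the exporter restarted, listening on [::]:9100, and TLS disabled per the tls_config line, its exposition endpoint can be read over plain HTTP. A sketch that fetches the page and counts the metric families announced by the collectors enabled above; localhost is an assumption, the port is the one logged:

    import urllib.request

    # Plain-HTTP scrape, valid because the exporter reports TLS disabled.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as r:
        text = r.read().decode()

    # "# TYPE <name> <type>" lines announce one metric family each.
    families = {line.split()[2] for line in text.splitlines()
                if line.startswith("# TYPE ")}
    print(len(families), "metric families exposed")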
Oct  9 09:37:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:37:33 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:37:33 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:33 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct  9 09:37:33 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct  9 09:37:33 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct  9 09:37:33 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct  9 09:37:34 compute-0 ceph-mon[4497]: Reconfiguring crash.compute-0 (monmap changed)...
Oct  9 09:37:34 compute-0 ceph-mon[4497]: Reconfiguring daemon crash.compute-0 on compute-0
Oct  9 09:37:34 compute-0 ceph-mon[4497]: Reconfiguring osd.1 (monmap changed)...
Oct  9 09:37:34 compute-0 ceph-mon[4497]: Reconfiguring daemon osd.1 on compute-0
Oct  9 09:37:34 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:34 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:34 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:34 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:34 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b4005560 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:34 compute-0 podman[33485]: 2025-10-09 09:37:34.24062469 +0000 UTC m=+0.028909262 volume create cf9b70f7ae0a9c2ea09c6735edaf104d93e74358d97e347e61b61d336d5c98a1
Oct  9 09:37:34 compute-0 podman[33485]: 2025-10-09 09:37:34.247304613 +0000 UTC m=+0.035589185 container create d58d929476b494e006aa7c545f9617ece9901c274c0fa10732dea3130617f128 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vigorous_boyd, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:34 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 09:37:34 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v28: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct  9 09:37:34 compute-0 systemd[1]: Started libpod-conmon-d58d929476b494e006aa7c545f9617ece9901c274c0fa10732dea3130617f128.scope.
Oct  9 09:37:34 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8bdeefe5b9c9d53ce943daf11330c53f05752d96c87266367e845e2fd17e7e2/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:34 compute-0 podman[33485]: 2025-10-09 09:37:34.313128549 +0000 UTC m=+0.101413131 container init d58d929476b494e006aa7c545f9617ece9901c274c0fa10732dea3130617f128 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vigorous_boyd, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:34 compute-0 podman[33485]: 2025-10-09 09:37:34.31872986 +0000 UTC m=+0.107014442 container start d58d929476b494e006aa7c545f9617ece9901c274c0fa10732dea3130617f128 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vigorous_boyd, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:34 compute-0 podman[33485]: 2025-10-09 09:37:34.320186506 +0000 UTC m=+0.108471088 container attach d58d929476b494e006aa7c545f9617ece9901c274c0fa10732dea3130617f128 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vigorous_boyd, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:34 compute-0 vigorous_boyd[33498]: 65534 65534
Oct  9 09:37:34 compute-0 systemd[1]: libpod-d58d929476b494e006aa7c545f9617ece9901c274c0fa10732dea3130617f128.scope: Deactivated successfully.
Oct  9 09:37:34 compute-0 podman[33485]: 2025-10-09 09:37:34.321439728 +0000 UTC m=+0.109724300 container died d58d929476b494e006aa7c545f9617ece9901c274c0fa10732dea3130617f128 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vigorous_boyd, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:34 compute-0 podman[33485]: 2025-10-09 09:37:34.230256484 +0000 UTC m=+0.018541076 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct  9 09:37:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8bdeefe5b9c9d53ce943daf11330c53f05752d96c87266367e845e2fd17e7e2-merged.mount: Deactivated successfully.
Oct  9 09:37:34 compute-0 podman[33485]: 2025-10-09 09:37:34.357121117 +0000 UTC m=+0.145405690 container remove d58d929476b494e006aa7c545f9617ece9901c274c0fa10732dea3130617f128 (image=quay.io/prometheus/alertmanager:v0.25.0, name=vigorous_boyd, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:34 compute-0 podman[33485]: 2025-10-09 09:37:34.358384588 +0000 UTC m=+0.146669170 volume remove cf9b70f7ae0a9c2ea09c6735edaf104d93e74358d97e347e61b61d336d5c98a1
Oct  9 09:37:34 compute-0 systemd[1]: libpod-conmon-d58d929476b494e006aa7c545f9617ece9901c274c0fa10732dea3130617f128.scope: Deactivated successfully.
Oct  9 09:37:34 compute-0 podman[33536]: 2025-10-09 09:37:34.399263697 +0000 UTC m=+0.026693565 volume create 8bfb7a7d9497b9fd2507aade9ddc3b4d00cdbbd099da5d30a95a4620ee26394d
Oct  9 09:37:34 compute-0 podman[33536]: 2025-10-09 09:37:34.413364411 +0000 UTC m=+0.040794278 container create 44590a885fe6c8a9731bed0ed1e06bff1457dcaeb153b771e18a2e39e75e5a82 (image=quay.io/prometheus/alertmanager:v0.25.0, name=jovial_chandrasekhar, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:34 compute-0 systemd[1]: Started libpod-conmon-44590a885fe6c8a9731bed0ed1e06bff1457dcaeb153b771e18a2e39e75e5a82.scope.
Oct  9 09:37:34 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432c940fc3bb0637e37080f83cd2aef2157bb555a7a822514806a0a72fe8ef62/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:34 compute-0 podman[33536]: 2025-10-09 09:37:34.466176515 +0000 UTC m=+0.093606403 container init 44590a885fe6c8a9731bed0ed1e06bff1457dcaeb153b771e18a2e39e75e5a82 (image=quay.io/prometheus/alertmanager:v0.25.0, name=jovial_chandrasekhar, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:34 compute-0 podman[33536]: 2025-10-09 09:37:34.469838138 +0000 UTC m=+0.097267997 container start 44590a885fe6c8a9731bed0ed1e06bff1457dcaeb153b771e18a2e39e75e5a82 (image=quay.io/prometheus/alertmanager:v0.25.0, name=jovial_chandrasekhar, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:34 compute-0 jovial_chandrasekhar[33550]: 65534 65534
Oct  9 09:37:34 compute-0 systemd[1]: libpod-44590a885fe6c8a9731bed0ed1e06bff1457dcaeb153b771e18a2e39e75e5a82.scope: Deactivated successfully.
Oct  9 09:37:34 compute-0 podman[33536]: 2025-10-09 09:37:34.471482238 +0000 UTC m=+0.098912106 container attach 44590a885fe6c8a9731bed0ed1e06bff1457dcaeb153b771e18a2e39e75e5a82 (image=quay.io/prometheus/alertmanager:v0.25.0, name=jovial_chandrasekhar, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:34 compute-0 podman[33536]: 2025-10-09 09:37:34.47171159 +0000 UTC m=+0.099141699 container died 44590a885fe6c8a9731bed0ed1e06bff1457dcaeb153b771e18a2e39e75e5a82 (image=quay.io/prometheus/alertmanager:v0.25.0, name=jovial_chandrasekhar, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-432c940fc3bb0637e37080f83cd2aef2157bb555a7a822514806a0a72fe8ef62-merged.mount: Deactivated successfully.
Oct  9 09:37:34 compute-0 podman[33536]: 2025-10-09 09:37:34.390198606 +0000 UTC m=+0.017628495 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct  9 09:37:34 compute-0 podman[33536]: 2025-10-09 09:37:34.488916968 +0000 UTC m=+0.116346835 container remove 44590a885fe6c8a9731bed0ed1e06bff1457dcaeb153b771e18a2e39e75e5a82 (image=quay.io/prometheus/alertmanager:v0.25.0, name=jovial_chandrasekhar, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:34 compute-0 podman[33536]: 2025-10-09 09:37:34.490241654 +0000 UTC m=+0.117671522 volume remove 8bfb7a7d9497b9fd2507aade9ddc3b4d00cdbbd099da5d30a95a4620ee26394d
Oct  9 09:37:34 compute-0 systemd[1]: libpod-conmon-44590a885fe6c8a9731bed0ed1e06bff1457dcaeb153b771e18a2e39e75e5a82.scope: Deactivated successfully.
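[annotation] The alertmanager probe containers (vigorous_boyd, jovial_chandrasekhar) follow the same create/start/attach/died/remove pattern as the ceph ones, with a scratch volume created and removed around each run, but they print "65534 65534" instead of "167 167": the Prometheus images run as the conventional unprivileged nobody account. Resolving the pair on the host (names vary slightly by distribution):

    import grp
    import pwd

    # 65534/65534, the pair both probe containers printed, is the classic
    # "nobody" uid/gid on Linux hosts.
    print(pwd.getpwuid(65534).pw_name)   # typically 'nobody'
    print(grp.getgrgid(65534).gr_name)   # 'nobody' or 'nogroup' by distro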
Oct  9 09:37:34 compute-0 systemd[1]: Stopping Ceph alertmanager.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:37:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:37:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:37:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:34.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[24823]: ts=2025-10-09T09:37:34.632Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Oct  9 09:37:34 compute-0 podman[33591]: 2025-10-09 09:37:34.643387454 +0000 UTC m=+0.033148634 container died bd3cbdfb5f1cb9bb74e2043c48786e84aea19baa506d844adecf836d2e2fa6f1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:34 compute-0 podman[33591]: 2025-10-09 09:37:34.660572473 +0000 UTC m=+0.050333653 container remove bd3cbdfb5f1cb9bb74e2043c48786e84aea19baa506d844adecf836d2e2fa6f1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:34 compute-0 podman[33591]: 2025-10-09 09:37:34.661982963 +0000 UTC m=+0.051744142 volume remove 9e67def042e827328b0d7fc63b2a678777c6accad0661d2e3494005ce80ceb8a
Oct  9 09:37:34 compute-0 bash[33591]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0
Oct  9 09:37:34 compute-0 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@alertmanager.compute-0.service: Deactivated successfully.
Oct  9 09:37:34 compute-0 systemd[1]: Stopped Ceph alertmanager.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:37:34 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:37:34 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Oct  9 09:37:34 compute-0 systemd[1]: session-22.scope: Consumed 6.428s CPU time.
Oct  9 09:37:34 compute-0 systemd-logind[798]: Session 22 logged out. Waiting for processes to exit.
Oct  9 09:37:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f79b46ae12a440a24b8f0d9c8dd9165d4911897c94bc41aced8452688e26b442-merged.mount: Deactivated successfully.
Oct  9 09:37:34 compute-0 systemd-logind[798]: Removed session 22.
Oct  9 09:37:34 compute-0 podman[33670]: 2025-10-09 09:37:34.900217869 +0000 UTC m=+0.024345218 volume create 2a7ec503592a36a55bbfcdc35fbdf21d19c808b3501980bc047cf7c9baed2b7f
Oct  9 09:37:34 compute-0 podman[33670]: 2025-10-09 09:37:34.907332521 +0000 UTC m=+0.031459881 container create 5c740331e43a547cef58f363bed860d932ba62ab932b4c8a13e2e8dac6839868 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:34 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b4005560 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeae5311875ac22daf71218d1f72fa38273972e43b26c0ebc9c8500ea3f92e68/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeae5311875ac22daf71218d1f72fa38273972e43b26c0ebc9c8500ea3f92e68/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:34 compute-0 podman[33670]: 2025-10-09 09:37:34.945838034 +0000 UTC m=+0.069965394 container init 5c740331e43a547cef58f363bed860d932ba62ab932b4c8a13e2e8dac6839868 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:34 compute-0 podman[33670]: 2025-10-09 09:37:34.950180471 +0000 UTC m=+0.074307820 container start 5c740331e43a547cef58f363bed860d932ba62ab932b4c8a13e2e8dac6839868 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:34 compute-0 bash[33670]: 5c740331e43a547cef58f363bed860d932ba62ab932b4c8a13e2e8dac6839868
Oct  9 09:37:34 compute-0 podman[33670]: 2025-10-09 09:37:34.891950552 +0000 UTC m=+0.016077921 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Oct  9 09:37:34 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:37:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:37:34.966Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Oct  9 09:37:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:37:34.966Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Oct  9 09:37:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:37:34.973Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.26.64 port=9094
Oct  9 09:37:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:37:34.974Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Oct  9 09:37:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:37:34 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:37:34 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:35 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Oct  9 09:37:35 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Oct  9 09:37:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:37:35.001Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Oct  9 09:37:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:37:35.002Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Oct  9 09:37:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:37:35.005Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Oct  9 09:37:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:37:35.005Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Oct  9 09:37:35 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Oct  9 09:37:35 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Oct  9 09:37:35 compute-0 ceph-mon[4497]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Oct  9 09:37:35 compute-0 ceph-mon[4497]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Oct  9 09:37:35 compute-0 ceph-mon[4497]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct  9 09:37:35 compute-0 ceph-mon[4497]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct  9 09:37:35 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:35 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:35 compute-0 podman[33764]: 2025-10-09 09:37:35.437350777 +0000 UTC m=+0.029224847 container create 6454f411ce852a2912814b53ce9053713f86f1ca131e4b359b345351fb98214c (image=quay.io/ceph/grafana:10.4.0, name=affectionate_archimedes, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:35 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b4005560 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:35 compute-0 systemd[1]: Started libpod-conmon-6454f411ce852a2912814b53ce9053713f86f1ca131e4b359b345351fb98214c.scope.
Oct  9 09:37:35 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:35 compute-0 podman[33764]: 2025-10-09 09:37:35.491849391 +0000 UTC m=+0.083723482 container init 6454f411ce852a2912814b53ce9053713f86f1ca131e4b359b345351fb98214c (image=quay.io/ceph/grafana:10.4.0, name=affectionate_archimedes, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:35 compute-0 podman[33764]: 2025-10-09 09:37:35.497469116 +0000 UTC m=+0.089343176 container start 6454f411ce852a2912814b53ce9053713f86f1ca131e4b359b345351fb98214c (image=quay.io/ceph/grafana:10.4.0, name=affectionate_archimedes, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:35 compute-0 podman[33764]: 2025-10-09 09:37:35.49880247 +0000 UTC m=+0.090676561 container attach 6454f411ce852a2912814b53ce9053713f86f1ca131e4b359b345351fb98214c (image=quay.io/ceph/grafana:10.4.0, name=affectionate_archimedes, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:35 compute-0 affectionate_archimedes[33777]: 472 0
Oct  9 09:37:35 compute-0 systemd[1]: libpod-6454f411ce852a2912814b53ce9053713f86f1ca131e4b359b345351fb98214c.scope: Deactivated successfully.
Oct  9 09:37:35 compute-0 podman[33764]: 2025-10-09 09:37:35.500321092 +0000 UTC m=+0.092195173 container died 6454f411ce852a2912814b53ce9053713f86f1ca131e4b359b345351fb98214c (image=quay.io/ceph/grafana:10.4.0, name=affectionate_archimedes, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-276a0022de9c895c2ec37cee8821111819425cc85bbcc1561d71d40d5d954203-merged.mount: Deactivated successfully.
Oct  9 09:37:35 compute-0 podman[33764]: 2025-10-09 09:37:35.515547668 +0000 UTC m=+0.107421739 container remove 6454f411ce852a2912814b53ce9053713f86f1ca131e4b359b345351fb98214c (image=quay.io/ceph/grafana:10.4.0, name=affectionate_archimedes, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:35 compute-0 podman[33764]: 2025-10-09 09:37:35.424690338 +0000 UTC m=+0.016564429 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct  9 09:37:35 compute-0 systemd[1]: libpod-conmon-6454f411ce852a2912814b53ce9053713f86f1ca131e4b359b345351fb98214c.scope: Deactivated successfully.
Oct  9 09:37:35 compute-0 podman[33789]: 2025-10-09 09:37:35.559374052 +0000 UTC m=+0.029447006 container create 74cdc5332253d6c43742d56c016690c97b59596323bbc1aa52d520360ba444a7 (image=quay.io/ceph/grafana:10.4.0, name=pedantic_swartz, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:35 compute-0 systemd[1]: Started libpod-conmon-74cdc5332253d6c43742d56c016690c97b59596323bbc1aa52d520360ba444a7.scope.
Oct  9 09:37:35 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:35 compute-0 podman[33789]: 2025-10-09 09:37:35.607705454 +0000 UTC m=+0.077778408 container init 74cdc5332253d6c43742d56c016690c97b59596323bbc1aa52d520360ba444a7 (image=quay.io/ceph/grafana:10.4.0, name=pedantic_swartz, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:35 compute-0 podman[33789]: 2025-10-09 09:37:35.612719442 +0000 UTC m=+0.082792396 container start 74cdc5332253d6c43742d56c016690c97b59596323bbc1aa52d520360ba444a7 (image=quay.io/ceph/grafana:10.4.0, name=pedantic_swartz, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:35 compute-0 pedantic_swartz[33805]: 472 0
Oct  9 09:37:35 compute-0 systemd[1]: libpod-74cdc5332253d6c43742d56c016690c97b59596323bbc1aa52d520360ba444a7.scope: Deactivated successfully.
Oct  9 09:37:35 compute-0 podman[33789]: 2025-10-09 09:37:35.61458074 +0000 UTC m=+0.084653694 container attach 74cdc5332253d6c43742d56c016690c97b59596323bbc1aa52d520360ba444a7 (image=quay.io/ceph/grafana:10.4.0, name=pedantic_swartz, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:35 compute-0 podman[33789]: 2025-10-09 09:37:35.614720052 +0000 UTC m=+0.084793006 container died 74cdc5332253d6c43742d56c016690c97b59596323bbc1aa52d520360ba444a7 (image=quay.io/ceph/grafana:10.4.0, name=pedantic_swartz, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:35 compute-0 podman[33789]: 2025-10-09 09:37:35.630550805 +0000 UTC m=+0.100623759 container remove 74cdc5332253d6c43742d56c016690c97b59596323bbc1aa52d520360ba444a7 (image=quay.io/ceph/grafana:10.4.0, name=pedantic_swartz, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:35 compute-0 podman[33789]: 2025-10-09 09:37:35.546545507 +0000 UTC m=+0.016618471 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct  9 09:37:35 compute-0 systemd[1]: libpod-conmon-74cdc5332253d6c43742d56c016690c97b59596323bbc1aa52d520360ba444a7.scope: Deactivated successfully.
Oct  9 09:37:35 compute-0 systemd[1]: Stopping Ceph grafana.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:37:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=server t=2025-10-09T09:37:35.778193267Z level=info msg="Shutdown started" reason="System signal: terminated"
Oct  9 09:37:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=tracing t=2025-10-09T09:37:35.778466022Z level=info msg="Closing tracing"
Oct  9 09:37:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=grafana-apiserver t=2025-10-09T09:37:35.778607679Z level=info msg="StorageObjectCountTracker pruner is exiting"
Oct  9 09:37:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=ticker t=2025-10-09T09:37:35.778689383Z level=info msg=stopped last_tick=2025-10-09T09:37:30Z
Oct  9 09:37:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[25314]: logger=sqlstore.transactions t=2025-10-09T09:37:35.789967446Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Oct  9 09:37:35 compute-0 podman[33843]: 2025-10-09 09:37:35.798763202 +0000 UTC m=+0.042301197 container died 80f41780a224394d2e72978ad05b417bbf3d1eeac5620f866d5082d3b8450db5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:35 compute-0 podman[33843]: 2025-10-09 09:37:35.820594863 +0000 UTC m=+0.064132858 container remove 80f41780a224394d2e72978ad05b417bbf3d1eeac5620f866d5082d3b8450db5 (image=quay.io/ceph/grafana:10.4.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:35 compute-0 bash[33843]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0
Oct  9 09:37:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:35.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-8739ee1d9b72f32f352d5bdbbda27adfce027f49325760051334445191a363c3-merged.mount: Deactivated successfully.
Oct  9 09:37:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-256bc183ed1e9ebb8565c258ec613ce5f7bf4760464ea9bf1cfca84b22ee1758-merged.mount: Deactivated successfully.
Oct  9 09:37:35 compute-0 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@grafana.compute-0.service: Deactivated successfully.
Oct  9 09:37:35 compute-0 systemd[1]: Stopped Ceph grafana.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:37:35 compute-0 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@grafana.compute-0.service: Consumed 3.236s CPU time.
Oct  9 09:37:35 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:37:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:36 compute-0 podman[33924]: 2025-10-09 09:37:36.070262151 +0000 UTC m=+0.033175008 container create d505ba96f4f8073a145fdc67466363156d038071ebcd8a8aeed53305dbe3584a (image=quay.io/ceph/grafana:10.4.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:36 compute-0 ceph-mon[4497]: Reconfiguring grafana.compute-0 (dependencies changed)...
Oct  9 09:37:36 compute-0 ceph-mon[4497]: Reconfiguring daemon grafana.compute-0 on compute-0
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:36 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b4005560 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b64fa68fe746aa82fcffe4d56a088ae18daeccc04125100f01762099af6b624/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b64fa68fe746aa82fcffe4d56a088ae18daeccc04125100f01762099af6b624/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b64fa68fe746aa82fcffe4d56a088ae18daeccc04125100f01762099af6b624/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b64fa68fe746aa82fcffe4d56a088ae18daeccc04125100f01762099af6b624/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b64fa68fe746aa82fcffe4d56a088ae18daeccc04125100f01762099af6b624/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:36 compute-0 podman[33924]: 2025-10-09 09:37:36.115125667 +0000 UTC m=+0.078038524 container init d505ba96f4f8073a145fdc67466363156d038071ebcd8a8aeed53305dbe3584a (image=quay.io/ceph/grafana:10.4.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:36 compute-0 podman[33924]: 2025-10-09 09:37:36.121207067 +0000 UTC m=+0.084119914 container start d505ba96f4f8073a145fdc67466363156d038071ebcd8a8aeed53305dbe3584a (image=quay.io/ceph/grafana:10.4.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:36 compute-0 bash[33924]: d505ba96f4f8073a145fdc67466363156d038071ebcd8a8aeed53305dbe3584a
Oct  9 09:37:36 compute-0 podman[33924]: 2025-10-09 09:37:36.056961707 +0000 UTC m=+0.019874584 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Oct  9 09:37:36 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:37:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:37:36 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:37:36 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:36 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Oct  9 09:37:36 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Oct  9 09:37:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Oct  9 09:37:36 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 09:37:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:37:36 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:37:36 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Oct  9 09:37:36 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=settings t=2025-10-09T09:37:36.256860266Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-10-09T09:37:36Z
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=settings t=2025-10-09T09:37:36.257067486Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=settings t=2025-10-09T09:37:36.25708042Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=settings t=2025-10-09T09:37:36.257085039Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=settings t=2025-10-09T09:37:36.257089147Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=settings t=2025-10-09T09:37:36.257092433Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=settings t=2025-10-09T09:37:36.257095629Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=settings t=2025-10-09T09:37:36.257098534Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=settings t=2025-10-09T09:37:36.257102421Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=settings t=2025-10-09T09:37:36.257105648Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=settings t=2025-10-09T09:37:36.257108633Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=settings t=2025-10-09T09:37:36.257111439Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=settings t=2025-10-09T09:37:36.257114675Z level=info msg=Target target=[all]
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=settings t=2025-10-09T09:37:36.257119845Z level=info msg="Path Home" path=/usr/share/grafana
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=settings t=2025-10-09T09:37:36.257122791Z level=info msg="Path Data" path=/var/lib/grafana
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=settings t=2025-10-09T09:37:36.257125496Z level=info msg="Path Logs" path=/var/log/grafana
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=settings t=2025-10-09T09:37:36.257128231Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=settings t=2025-10-09T09:37:36.257131146Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=settings t=2025-10-09T09:37:36.257134122Z level=info msg="App mode production"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=sqlstore t=2025-10-09T09:37:36.257395333Z level=info msg="Connecting to DB" dbtype=sqlite3
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=sqlstore t=2025-10-09T09:37:36.25741476Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=migrator t=2025-10-09T09:37:36.257915634Z level=info msg="Starting DB migrations"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=migrator t=2025-10-09T09:37:36.270955588Z level=info msg="migrations completed" performed=0 skipped=547 duration=432.034µs
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=sqlstore t=2025-10-09T09:37:36.271817252Z level=info msg="Created default organization"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=secrets t=2025-10-09T09:37:36.272309009Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Oct  9 09:37:36 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v29: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=plugin.store t=2025-10-09T09:37:36.285043928Z level=info msg="Loading plugins..."
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=local.finder t=2025-10-09T09:37:36.343677392Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=plugin.store t=2025-10-09T09:37:36.343695637Z level=info msg="Plugins loaded" count=55 duration=58.652531ms
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=query_data t=2025-10-09T09:37:36.355294665Z level=info msg="Query Service initialization"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=live.push_http t=2025-10-09T09:37:36.357496324Z level=info msg="Live Push Gateway initialization"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=ngalert.migration t=2025-10-09T09:37:36.359255479Z level=info msg=Starting
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=ngalert.state.manager t=2025-10-09T09:37:36.366075982Z level=info msg="Running in alternative execution of Error/NoData mode"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=infra.usagestats.collector t=2025-10-09T09:37:36.367455322Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=provisioning.datasources t=2025-10-09T09:37:36.369329513Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=provisioning.alerting t=2025-10-09T09:37:36.386308521Z level=info msg="starting to provision alerting"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=provisioning.alerting t=2025-10-09T09:37:36.386359827Z level=info msg="finished to provision alerting"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=ngalert.state.manager t=2025-10-09T09:37:36.386667898Z level=info msg="Warming state cache for startup"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=ngalert.state.manager t=2025-10-09T09:37:36.387043125Z level=info msg="State cache has been initialized" states=0 duration=374.397µs
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=ngalert.multiorg.alertmanager t=2025-10-09T09:37:36.387234075Z level=info msg="Starting MultiOrg Alertmanager"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=ngalert.scheduler t=2025-10-09T09:37:36.387255886Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=ticker t=2025-10-09T09:37:36.387291763Z level=info msg=starting first_tick=2025-10-09T09:37:40Z
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=provisioning.dashboard t=2025-10-09T09:37:36.387988096Z level=info msg="starting to provision dashboards"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=http.server t=2025-10-09T09:37:36.388262222Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=http.server t=2025-10-09T09:37:36.388519408Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=grafanaStorageLogger t=2025-10-09T09:37:36.398613991Z level=info msg="Storage starting"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=provisioning.dashboard t=2025-10-09T09:37:36.41272381Z level=info msg="finished to provision dashboards"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=grafana.update.checker t=2025-10-09T09:37:36.444288514Z level=info msg="Update check succeeded" duration=56.963847ms
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=plugins.update.checker t=2025-10-09T09:37:36.448201498Z level=info msg="Update check succeeded" duration=60.407438ms
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=grafana-apiserver t=2025-10-09T09:37:36.549103989Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=grafana-apiserver t=2025-10-09T09:37:36.54988485Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Oct  9 09:37:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:37:36 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:37:36 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:36 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Oct  9 09:37:36 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Oct  9 09:37:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Oct  9 09:37:36 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  9 09:37:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:37:36 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:37:36 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-1
Oct  9 09:37:36 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-1
Oct  9 09:37:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:36.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:36 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac006430 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:37:36.975Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000967391s
Oct  9 09:37:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:37:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:37:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:37 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Oct  9 09:37:37 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Oct  9 09:37:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct  9 09:37:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 09:37:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct  9 09:37:37 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  9 09:37:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:37:37 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:37:37 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Oct  9 09:37:37 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Oct  9 09:37:37 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:37 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:37 compute-0 ceph-mon[4497]: Reconfiguring crash.compute-1 (monmap changed)...
Oct  9 09:37:37 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 09:37:37 compute-0 ceph-mon[4497]: Reconfiguring daemon crash.compute-1 on compute-1
Oct  9 09:37:37 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:37 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:37 compute-0 ceph-mon[4497]: Reconfiguring osd.0 (monmap changed)...
Oct  9 09:37:37 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  9 09:37:37 compute-0 ceph-mon[4497]: Reconfiguring daemon osd.0 on compute-1
Oct  9 09:37:37 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:37 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:37 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 09:37:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:37 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b80096e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:37:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:37:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:37 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Oct  9 09:37:37 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Oct  9 09:37:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Oct  9 09:37:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 09:37:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Oct  9 09:37:37 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  9 09:37:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:37:37 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:37:37 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Oct  9 09:37:37 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Oct  9 09:37:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:37.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:37:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:37:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:38 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.takdnm (monmap changed)...
Oct  9 09:37:38 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.takdnm (monmap changed)...
Oct  9 09:37:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.takdnm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Oct  9 09:37:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.takdnm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 09:37:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  9 09:37:38 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  9 09:37:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:37:38 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:37:38 compute-0 ceph-mgr[4772]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.takdnm on compute-2
Oct  9 09:37:38 compute-0 ceph-mgr[4772]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.takdnm on compute-2
Oct  9 09:37:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:38 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b4005560 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:38 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v30: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct  9 09:37:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:37:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:37:38 compute-0 ceph-mon[4497]: Reconfiguring mon.compute-1 (monmap changed)...
Oct  9 09:37:38 compute-0 ceph-mon[4497]: Reconfiguring daemon mon.compute-1 on compute-1
Oct  9 09:37:38 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:38 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:38 compute-0 ceph-mon[4497]: Reconfiguring mon.compute-2 (monmap changed)...
Oct  9 09:37:38 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 09:37:38 compute-0 ceph-mon[4497]: Reconfiguring daemon mon.compute-2 on compute-2
Oct  9 09:37:38 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:38 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:38 compute-0 ceph-mon[4497]: Reconfiguring mgr.compute-2.takdnm (monmap changed)...
Oct  9 09:37:38 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.takdnm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 09:37:38 compute-0 ceph-mon[4497]: Reconfiguring daemon mgr.compute-2.takdnm on compute-2
Oct  9 09:37:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Oct  9 09:37:38 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct  9 09:37:38 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct  9 09:37:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Oct  9 09:37:38 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct  9 09:37:38 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct  9 09:37:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Oct  9 09:37:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct  9 09:37:38 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct  9 09:37:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Oct  9 09:37:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: [09/Oct/2025:09:37:38] ENGINE Bus STOPPING
Oct  9 09:37:38 compute-0 ceph-mgr[4772]: [prometheus INFO root] Restarting engine...
Oct  9 09:37:38 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.error] [09/Oct/2025:09:37:38] ENGINE Bus STOPPING
Oct  9 09:37:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:38.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: [09/Oct/2025:09:37:38] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Oct  9 09:37:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: [09/Oct/2025:09:37:38] ENGINE Bus STOPPED
Oct  9 09:37:38 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.error] [09/Oct/2025:09:37:38] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Oct  9 09:37:38 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.error] [09/Oct/2025:09:37:38] ENGINE Bus STOPPED
Oct  9 09:37:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: [09/Oct/2025:09:37:38] ENGINE Bus STARTING
Oct  9 09:37:38 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.error] [09/Oct/2025:09:37:38] ENGINE Bus STARTING
Oct  9 09:37:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: [09/Oct/2025:09:37:38] ENGINE Serving on http://:::9283
Oct  9 09:37:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: [09/Oct/2025:09:37:38] ENGINE Bus STARTED
Oct  9 09:37:38 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.error] [09/Oct/2025:09:37:38] ENGINE Serving on http://:::9283
Oct  9 09:37:38 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.error] [09/Oct/2025:09:37:38] ENGINE Bus STARTED
Oct  9 09:37:38 compute-0 ceph-mgr[4772]: [prometheus INFO root] Engine started.
Oct  9 09:37:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:38 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b4005560 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:39 compute-0 podman[34073]: 2025-10-09 09:37:39.052479599 +0000 UTC m=+0.036447517 container exec fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 09:37:39 compute-0 podman[34073]: 2025-10-09 09:37:39.130523714 +0000 UTC m=+0.114491631 container exec_died fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  9 09:37:39 compute-0 podman[34166]: 2025-10-09 09:37:39.425198469 +0000 UTC m=+0.033223180 container exec 10161c66b361b66edfdbf4951997fb2366322c945e67f044787f85dddc54c994 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:39 compute-0 podman[34166]: 2025-10-09 09:37:39.434356977 +0000 UTC m=+0.042381670 container exec_died 10161c66b361b66edfdbf4951997fb2366322c945e67f044787f85dddc54c994 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:39 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:39 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:39 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:39 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:39 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct  9 09:37:39 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:39 compute-0 podman[34251]: 2025-10-09 09:37:39.673006211 +0000 UTC m=+0.032095084 container exec 5c740331e43a547cef58f363bed860d932ba62ab932b4c8a13e2e8dac6839868 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:39 compute-0 podman[34251]: 2025-10-09 09:37:39.695292961 +0000 UTC m=+0.054381813 container exec_died 5c740331e43a547cef58f363bed860d932ba62ab932b4c8a13e2e8dac6839868 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:39 compute-0 podman[34305]: 2025-10-09 09:37:39.832489338 +0000 UTC m=+0.031808945 container exec d505ba96f4f8073a145fdc67466363156d038071ebcd8a8aeed53305dbe3584a (image=quay.io/ceph/grafana:10.4.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:39.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:39 compute-0 podman[34305]: 2025-10-09 09:37:39.948332444 +0000 UTC m=+0.147652041 container exec_died d505ba96f4f8073a145fdc67466363156d038071ebcd8a8aeed53305dbe3584a (image=quay.io/ceph/grafana:10.4.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:37:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:37:40 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:37:40 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:40 compute-0 podman[34363]: 2025-10-09 09:37:40.08954048 +0000 UTC m=+0.033323420 container exec 0c3906f36b8c5387e26601a1089154bdda03c8f87fbea5119420184790883682 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-0-kmcywb)
Oct  9 09:37:40 compute-0 podman[34363]: 2025-10-09 09:37:40.095312979 +0000 UTC m=+0.039095917 container exec_died 0c3906f36b8c5387e26601a1089154bdda03c8f87fbea5119420184790883682 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-0-kmcywb)
Oct  9 09:37:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:40 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b80096e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:40 compute-0 podman[34415]: 2025-10-09 09:37:40.22990627 +0000 UTC m=+0.030753376 container exec 45254cf9a2cd91037496049d12c8fdc604c0d669b06c7d761c3228749e14c043 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, architecture=x86_64, name=keepalived)
Oct  9 09:37:40 compute-0 podman[34415]: 2025-10-09 09:37:40.243306462 +0000 UTC m=+0.044153559 container exec_died 45254cf9a2cd91037496049d12c8fdc604c0d669b06c7d761c3228749e14c043 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha, name=keepalived, io.openshift.expose-services=, io.buildah.version=1.28.2, release=1793, version=2.2.4, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, build-date=2023-02-22T09:23:20)
Oct  9 09:37:40 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v31: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct  9 09:37:40 compute-0 podman[34465]: 2025-10-09 09:37:40.380870061 +0000 UTC m=+0.033011852 container exec ad7aeb5739d77e7c0db5bedadf9f04170fb86eb3e4620e2c374ce0ab10bde8f2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:40 compute-0 podman[34465]: 2025-10-09 09:37:40.401395 +0000 UTC m=+0.053536791 container exec_died ad7aeb5739d77e7c0db5bedadf9f04170fb86eb3e4620e2c374ce0ab10bde8f2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:37:40 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:37:40 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:40 compute-0 podman[34510]: 2025-10-09 09:37:40.50683933 +0000 UTC m=+0.032167190 container exec ae795f28e8cc40d40a12c989e9bbeb32107bf485450cd1f6b578cfeea442e1a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 09:37:40 compute-0 podman[34510]: 2025-10-09 09:37:40.518361344 +0000 UTC m=+0.043689184 container exec_died ae795f28e8cc40d40a12c989e9bbeb32107bf485450cd1f6b578cfeea442e1a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:37:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:40.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:37:40 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:37:40 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:37:40 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:37:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:37:40 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:37:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:37:40 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:37:40 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:37:40 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:37:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:37:40 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:37:40 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:37:40 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:37:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-0-ujrhwc[30455]: [WARNING] 281/093740 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  9 09:37:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:40 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b4005560 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:41 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:37:41 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:37:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:41 compute-0 podman[34648]: 2025-10-09 09:37:41.046556657 +0000 UTC m=+0.026572137 container create 5082862fffb3ce505817ca4feeeac8e0aeb18e83fa68cd0b632b4e899b3283a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_brattain, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct  9 09:37:41 compute-0 systemd[1]: Started libpod-conmon-5082862fffb3ce505817ca4feeeac8e0aeb18e83fa68cd0b632b4e899b3283a3.scope.
Oct  9 09:37:41 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:41 compute-0 podman[34648]: 2025-10-09 09:37:41.101980624 +0000 UTC m=+0.081996103 container init 5082862fffb3ce505817ca4feeeac8e0aeb18e83fa68cd0b632b4e899b3283a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_brattain, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  9 09:37:41 compute-0 podman[34648]: 2025-10-09 09:37:41.106394371 +0000 UTC m=+0.086409850 container start 5082862fffb3ce505817ca4feeeac8e0aeb18e83fa68cd0b632b4e899b3283a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_brattain, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  9 09:37:41 compute-0 pensive_brattain[34662]: 167 167
Oct  9 09:37:41 compute-0 podman[34648]: 2025-10-09 09:37:41.109320757 +0000 UTC m=+0.089336235 container attach 5082862fffb3ce505817ca4feeeac8e0aeb18e83fa68cd0b632b4e899b3283a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_brattain, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  9 09:37:41 compute-0 systemd[1]: libpod-5082862fffb3ce505817ca4feeeac8e0aeb18e83fa68cd0b632b4e899b3283a3.scope: Deactivated successfully.
Oct  9 09:37:41 compute-0 podman[34648]: 2025-10-09 09:37:41.110110945 +0000 UTC m=+0.090126424 container died 5082862fffb3ce505817ca4feeeac8e0aeb18e83fa68cd0b632b4e899b3283a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_brattain, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct  9 09:37:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-54cd26f90850c800dec8f9c143161c6b723df107626ea2f4339d2a0c56121a65-merged.mount: Deactivated successfully.
Oct  9 09:37:41 compute-0 podman[34648]: 2025-10-09 09:37:41.130562606 +0000 UTC m=+0.110578086 container remove 5082862fffb3ce505817ca4feeeac8e0aeb18e83fa68cd0b632b4e899b3283a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_brattain, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  9 09:37:41 compute-0 podman[34648]: 2025-10-09 09:37:41.034786937 +0000 UTC m=+0.014802436 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:41 compute-0 systemd[1]: libpod-conmon-5082862fffb3ce505817ca4feeeac8e0aeb18e83fa68cd0b632b4e899b3283a3.scope: Deactivated successfully.
Oct  9 09:37:41 compute-0 podman[34684]: 2025-10-09 09:37:41.238202343 +0000 UTC m=+0.026499779 container create 367992a6222b91a1bdbb47bceb61d79ab93779a0e5dc88191a89c3e1b4d1f1e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_black, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:37:41 compute-0 systemd[1]: Started libpod-conmon-367992a6222b91a1bdbb47bceb61d79ab93779a0e5dc88191a89c3e1b4d1f1e9.scope.
Oct  9 09:37:41 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc3966b8cb0a9aa89fab4bfc4cbae4dc303fcc8a6d03b77d13ec4545bbacaac0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc3966b8cb0a9aa89fab4bfc4cbae4dc303fcc8a6d03b77d13ec4545bbacaac0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc3966b8cb0a9aa89fab4bfc4cbae4dc303fcc8a6d03b77d13ec4545bbacaac0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc3966b8cb0a9aa89fab4bfc4cbae4dc303fcc8a6d03b77d13ec4545bbacaac0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc3966b8cb0a9aa89fab4bfc4cbae4dc303fcc8a6d03b77d13ec4545bbacaac0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:41 compute-0 podman[34684]: 2025-10-09 09:37:41.302271783 +0000 UTC m=+0.090569229 container init 367992a6222b91a1bdbb47bceb61d79ab93779a0e5dc88191a89c3e1b4d1f1e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_black, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  9 09:37:41 compute-0 podman[34684]: 2025-10-09 09:37:41.307998074 +0000 UTC m=+0.096295520 container start 367992a6222b91a1bdbb47bceb61d79ab93779a0e5dc88191a89c3e1b4d1f1e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_black, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct  9 09:37:41 compute-0 podman[34684]: 2025-10-09 09:37:41.310186918 +0000 UTC m=+0.098484375 container attach 367992a6222b91a1bdbb47bceb61d79ab93779a0e5dc88191a89c3e1b4d1f1e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct  9 09:37:41 compute-0 podman[34684]: 2025-10-09 09:37:41.227692237 +0000 UTC m=+0.015989693 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:41 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b4005560 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:41 compute-0 amazing_black[34697]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:37:41 compute-0 amazing_black[34697]: --> All data devices are unavailable
Oct  9 09:37:41 compute-0 systemd[1]: libpod-367992a6222b91a1bdbb47bceb61d79ab93779a0e5dc88191a89c3e1b4d1f1e9.scope: Deactivated successfully.
Oct  9 09:37:41 compute-0 podman[34684]: 2025-10-09 09:37:41.565452567 +0000 UTC m=+0.353750013 container died 367992a6222b91a1bdbb47bceb61d79ab93779a0e5dc88191a89c3e1b4d1f1e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_black, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  9 09:37:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc3966b8cb0a9aa89fab4bfc4cbae4dc303fcc8a6d03b77d13ec4545bbacaac0-merged.mount: Deactivated successfully.
Oct  9 09:37:41 compute-0 podman[34684]: 2025-10-09 09:37:41.588024543 +0000 UTC m=+0.376321979 container remove 367992a6222b91a1bdbb47bceb61d79ab93779a0e5dc88191a89c3e1b4d1f1e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_black, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:37:41 compute-0 systemd[1]: libpod-conmon-367992a6222b91a1bdbb47bceb61d79ab93779a0e5dc88191a89c3e1b4d1f1e9.scope: Deactivated successfully.
Oct  9 09:37:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:41.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:41 compute-0 podman[34801]: 2025-10-09 09:37:41.986443795 +0000 UTC m=+0.031421726 container create d4a96e92d0a0a43cded3c43f3d8803d37807ed27d2019a6b5332552706e1bfc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  9 09:37:42 compute-0 systemd[1]: Started libpod-conmon-d4a96e92d0a0a43cded3c43f3d8803d37807ed27d2019a6b5332552706e1bfc3.scope.
Oct  9 09:37:42 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:42 compute-0 podman[34801]: 2025-10-09 09:37:42.038351154 +0000 UTC m=+0.083329084 container init d4a96e92d0a0a43cded3c43f3d8803d37807ed27d2019a6b5332552706e1bfc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brahmagupta, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:37:42 compute-0 podman[34801]: 2025-10-09 09:37:42.043730802 +0000 UTC m=+0.088708733 container start d4a96e92d0a0a43cded3c43f3d8803d37807ed27d2019a6b5332552706e1bfc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brahmagupta, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:37:42 compute-0 sleepy_brahmagupta[34816]: 167 167
Oct  9 09:37:42 compute-0 systemd[1]: libpod-d4a96e92d0a0a43cded3c43f3d8803d37807ed27d2019a6b5332552706e1bfc3.scope: Deactivated successfully.
Oct  9 09:37:42 compute-0 podman[34801]: 2025-10-09 09:37:42.047400468 +0000 UTC m=+0.092378399 container attach d4a96e92d0a0a43cded3c43f3d8803d37807ed27d2019a6b5332552706e1bfc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brahmagupta, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  9 09:37:42 compute-0 podman[34801]: 2025-10-09 09:37:42.047744716 +0000 UTC m=+0.092722647 container died d4a96e92d0a0a43cded3c43f3d8803d37807ed27d2019a6b5332552706e1bfc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  9 09:37:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8253594f4ffe55ee0ec8427de67d385249e40d1226c2d2f350c03cb8b7bdccdd-merged.mount: Deactivated successfully.
Oct  9 09:37:42 compute-0 podman[34801]: 2025-10-09 09:37:42.065071739 +0000 UTC m=+0.110049670 container remove d4a96e92d0a0a43cded3c43f3d8803d37807ed27d2019a6b5332552706e1bfc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:37:42 compute-0 podman[34801]: 2025-10-09 09:37:41.970646815 +0000 UTC m=+0.015624766 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:42 compute-0 systemd[1]: libpod-conmon-d4a96e92d0a0a43cded3c43f3d8803d37807ed27d2019a6b5332552706e1bfc3.scope: Deactivated successfully.
Oct  9 09:37:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:42 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b4005560 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:42 compute-0 podman[34837]: 2025-10-09 09:37:42.175622171 +0000 UTC m=+0.028557487 container create dfe6d33d57c67e3b54137a998a301a61aeac26fab4a42b5239f93f3d7b6bf373 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_newton, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  9 09:37:42 compute-0 systemd[1]: Started libpod-conmon-dfe6d33d57c67e3b54137a998a301a61aeac26fab4a42b5239f93f3d7b6bf373.scope.
Oct  9 09:37:42 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9bb3e8529c23648db5a7ebc64f52afa61cfd4014c3cbaa180daeebe4cab828/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9bb3e8529c23648db5a7ebc64f52afa61cfd4014c3cbaa180daeebe4cab828/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9bb3e8529c23648db5a7ebc64f52afa61cfd4014c3cbaa180daeebe4cab828/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9bb3e8529c23648db5a7ebc64f52afa61cfd4014c3cbaa180daeebe4cab828/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:42 compute-0 podman[34837]: 2025-10-09 09:37:42.228606039 +0000 UTC m=+0.081541355 container init dfe6d33d57c67e3b54137a998a301a61aeac26fab4a42b5239f93f3d7b6bf373 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_newton, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:37:42 compute-0 podman[34837]: 2025-10-09 09:37:42.238999316 +0000 UTC m=+0.091934621 container start dfe6d33d57c67e3b54137a998a301a61aeac26fab4a42b5239f93f3d7b6bf373 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  9 09:37:42 compute-0 podman[34837]: 2025-10-09 09:37:42.240546 +0000 UTC m=+0.093481326 container attach dfe6d33d57c67e3b54137a998a301a61aeac26fab4a42b5239f93f3d7b6bf373 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:37:42 compute-0 podman[34837]: 2025-10-09 09:37:42.163373047 +0000 UTC m=+0.016308373 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:42 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v32: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct  9 09:37:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:37:42] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Oct  9 09:37:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:37:42] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Oct  9 09:37:42 compute-0 amazing_newton[34850]: {
Oct  9 09:37:42 compute-0 amazing_newton[34850]:    "1": [
Oct  9 09:37:42 compute-0 amazing_newton[34850]:        {
Oct  9 09:37:42 compute-0 amazing_newton[34850]:            "devices": [
Oct  9 09:37:42 compute-0 amazing_newton[34850]:                "/dev/loop3"
Oct  9 09:37:42 compute-0 amazing_newton[34850]:            ],
Oct  9 09:37:42 compute-0 amazing_newton[34850]:            "lv_name": "ceph_lv0",
Oct  9 09:37:42 compute-0 amazing_newton[34850]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:37:42 compute-0 amazing_newton[34850]:            "lv_size": "21470642176",
Oct  9 09:37:42 compute-0 amazing_newton[34850]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:37:42 compute-0 amazing_newton[34850]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:37:42 compute-0 amazing_newton[34850]:            "name": "ceph_lv0",
Oct  9 09:37:42 compute-0 amazing_newton[34850]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:37:42 compute-0 amazing_newton[34850]:            "tags": {
Oct  9 09:37:42 compute-0 amazing_newton[34850]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:37:42 compute-0 amazing_newton[34850]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:37:42 compute-0 amazing_newton[34850]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:37:42 compute-0 amazing_newton[34850]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:37:42 compute-0 amazing_newton[34850]:                "ceph.cluster_name": "ceph",
Oct  9 09:37:42 compute-0 amazing_newton[34850]:                "ceph.crush_device_class": "",
Oct  9 09:37:42 compute-0 amazing_newton[34850]:                "ceph.encrypted": "0",
Oct  9 09:37:42 compute-0 amazing_newton[34850]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:37:42 compute-0 amazing_newton[34850]:                "ceph.osd_id": "1",
Oct  9 09:37:42 compute-0 amazing_newton[34850]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:37:42 compute-0 amazing_newton[34850]:                "ceph.type": "block",
Oct  9 09:37:42 compute-0 amazing_newton[34850]:                "ceph.vdo": "0",
Oct  9 09:37:42 compute-0 amazing_newton[34850]:                "ceph.with_tpm": "0"
Oct  9 09:37:42 compute-0 amazing_newton[34850]:            },
Oct  9 09:37:42 compute-0 amazing_newton[34850]:            "type": "block",
Oct  9 09:37:42 compute-0 amazing_newton[34850]:            "vg_name": "ceph_vg0"
Oct  9 09:37:42 compute-0 amazing_newton[34850]:        }
Oct  9 09:37:42 compute-0 amazing_newton[34850]:    ]
Oct  9 09:37:42 compute-0 amazing_newton[34850]: }
Oct  9 09:37:42 compute-0 systemd[1]: libpod-dfe6d33d57c67e3b54137a998a301a61aeac26fab4a42b5239f93f3d7b6bf373.scope: Deactivated successfully.
Oct  9 09:37:42 compute-0 podman[34861]: 2025-10-09 09:37:42.524236254 +0000 UTC m=+0.017685278 container died dfe6d33d57c67e3b54137a998a301a61aeac26fab4a42b5239f93f3d7b6bf373 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  9 09:37:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e9bb3e8529c23648db5a7ebc64f52afa61cfd4014c3cbaa180daeebe4cab828-merged.mount: Deactivated successfully.
Oct  9 09:37:42 compute-0 podman[34861]: 2025-10-09 09:37:42.550011819 +0000 UTC m=+0.043460832 container remove dfe6d33d57c67e3b54137a998a301a61aeac26fab4a42b5239f93f3d7b6bf373 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_newton, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Oct  9 09:37:42 compute-0 systemd[1]: libpod-conmon-dfe6d33d57c67e3b54137a998a301a61aeac26fab4a42b5239f93f3d7b6bf373.scope: Deactivated successfully.
Oct  9 09:37:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:42.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:42 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b800a3f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:42 compute-0 podman[34954]: 2025-10-09 09:37:42.961847974 +0000 UTC m=+0.028483900 container create 21eb036d2d8e63a52cf735a251d412e508fa4512be070754f10476c59f2100c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shirley, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  9 09:37:42 compute-0 systemd[1]: Started libpod-conmon-21eb036d2d8e63a52cf735a251d412e508fa4512be070754f10476c59f2100c9.scope.
Oct  9 09:37:43 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:43 compute-0 podman[34954]: 2025-10-09 09:37:43.01382275 +0000 UTC m=+0.080458686 container init 21eb036d2d8e63a52cf735a251d412e508fa4512be070754f10476c59f2100c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:37:43 compute-0 podman[34954]: 2025-10-09 09:37:43.018393193 +0000 UTC m=+0.085029109 container start 21eb036d2d8e63a52cf735a251d412e508fa4512be070754f10476c59f2100c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:37:43 compute-0 podman[34954]: 2025-10-09 09:37:43.019429045 +0000 UTC m=+0.086064962 container attach 21eb036d2d8e63a52cf735a251d412e508fa4512be070754f10476c59f2100c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shirley, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:37:43 compute-0 distracted_shirley[34968]: 167 167
Oct  9 09:37:43 compute-0 systemd[1]: libpod-21eb036d2d8e63a52cf735a251d412e508fa4512be070754f10476c59f2100c9.scope: Deactivated successfully.
Oct  9 09:37:43 compute-0 podman[34954]: 2025-10-09 09:37:43.022059242 +0000 UTC m=+0.088695158 container died 21eb036d2d8e63a52cf735a251d412e508fa4512be070754f10476c59f2100c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shirley, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:37:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc45ce05f8e760d360104c67e396873b56923d0da9d57fb2e9f977c9fd680b35-merged.mount: Deactivated successfully.
Oct  9 09:37:43 compute-0 podman[34954]: 2025-10-09 09:37:43.042453634 +0000 UTC m=+0.109089551 container remove 21eb036d2d8e63a52cf735a251d412e508fa4512be070754f10476c59f2100c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:37:43 compute-0 podman[34954]: 2025-10-09 09:37:42.950101407 +0000 UTC m=+0.016737343 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:43 compute-0 systemd[1]: libpod-conmon-21eb036d2d8e63a52cf735a251d412e508fa4512be070754f10476c59f2100c9.scope: Deactivated successfully.
Oct  9 09:37:43 compute-0 podman[34989]: 2025-10-09 09:37:43.168302707 +0000 UTC m=+0.035314631 container create d29421101dc29caaf5475ddb1ec55b9b1dac0921f9dfbb30305fb54c32680180 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct  9 09:37:43 compute-0 systemd[1]: Started libpod-conmon-d29421101dc29caaf5475ddb1ec55b9b1dac0921f9dfbb30305fb54c32680180.scope.
Oct  9 09:37:43 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7993cf1e05967b5b64f71a7adc1bbbe2ff597f4ad5caeefc9e996c5b06b755cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7993cf1e05967b5b64f71a7adc1bbbe2ff597f4ad5caeefc9e996c5b06b755cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7993cf1e05967b5b64f71a7adc1bbbe2ff597f4ad5caeefc9e996c5b06b755cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7993cf1e05967b5b64f71a7adc1bbbe2ff597f4ad5caeefc9e996c5b06b755cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:43 compute-0 podman[34989]: 2025-10-09 09:37:43.229366622 +0000 UTC m=+0.096378526 container init d29421101dc29caaf5475ddb1ec55b9b1dac0921f9dfbb30305fb54c32680180 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:37:43 compute-0 podman[34989]: 2025-10-09 09:37:43.234732083 +0000 UTC m=+0.101743997 container start d29421101dc29caaf5475ddb1ec55b9b1dac0921f9dfbb30305fb54c32680180 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_montalcini, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:37:43 compute-0 podman[34989]: 2025-10-09 09:37:43.236130378 +0000 UTC m=+0.103142293 container attach d29421101dc29caaf5475ddb1ec55b9b1dac0921f9dfbb30305fb54c32680180 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_montalcini, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:37:43 compute-0 podman[34989]: 2025-10-09 09:37:43.155970357 +0000 UTC m=+0.022982271 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:43 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:43 compute-0 lvm[35078]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:37:43 compute-0 lvm[35078]: VG ceph_vg0 finished
Oct  9 09:37:43 compute-0 heuristic_montalcini[35003]: {}
Oct  9 09:37:43 compute-0 lvm[35081]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:37:43 compute-0 lvm[35081]: VG ceph_vg0 finished
Oct  9 09:37:43 compute-0 systemd[1]: libpod-d29421101dc29caaf5475ddb1ec55b9b1dac0921f9dfbb30305fb54c32680180.scope: Deactivated successfully.
Oct  9 09:37:43 compute-0 podman[34989]: 2025-10-09 09:37:43.742359907 +0000 UTC m=+0.609371821 container died d29421101dc29caaf5475ddb1ec55b9b1dac0921f9dfbb30305fb54c32680180 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:37:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7993cf1e05967b5b64f71a7adc1bbbe2ff597f4ad5caeefc9e996c5b06b755cb-merged.mount: Deactivated successfully.
Oct  9 09:37:43 compute-0 podman[34989]: 2025-10-09 09:37:43.76412269 +0000 UTC m=+0.631134603 container remove d29421101dc29caaf5475ddb1ec55b9b1dac0921f9dfbb30305fb54c32680180 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:37:43 compute-0 systemd[1]: libpod-conmon-d29421101dc29caaf5475ddb1ec55b9b1dac0921f9dfbb30305fb54c32680180.scope: Deactivated successfully.
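
The two short-lived containers above (distracted_shirley, then heuristic_montalcini) each pass through the full podman lifecycle of create, init, start, attach, died, and remove within about a second. That pattern suggests cephadm running one-shot probe commands in the ceph image rather than a crashing service, though the journal never records the container command line. A minimal sketch for reconstructing such lifecycles from journal lines like the ones above (lifecycles is a hypothetical helper, not part of any tool shown here):

    import re
    from collections import defaultdict

    # Matches journal lines such as:
    #   "Oct  9 09:37:43 compute-0 podman[34989]: ... container create d294..."
    EVENT_RE = re.compile(
        r"^(?P<ts>\S+ +\d+ \d\d:\d\d:\d\d) \S+ podman\[\d+\]:.*? container "
        r"(?P<event>create|init|start|attach|died|remove) (?P<cid>[0-9a-f]{64})"
    )

    def lifecycles(lines):
        """Group podman container events by container ID, in log order."""
        events = defaultdict(list)
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                events[m.group("cid")].append((m.group("ts"), m.group("event")))
        return events

    # A container whose event list ends ... -> died -> remove was a one-shot
    # run; an ID with start but no died is still alive.

Run over this section it would return two IDs, 21eb036d2d8e... and d29421101dc2..., both ending in remove.
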
Oct  9 09:37:43 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:37:43 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:43 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:37:43 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:43.816229) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663816255, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1334, "num_deletes": 251, "total_data_size": 2893595, "memory_usage": 2933096, "flush_reason": "Manual Compaction"}
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663822840, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 2428875, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6180, "largest_seqno": 7513, "table_properties": {"data_size": 2422768, "index_size": 3178, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14731, "raw_average_key_size": 20, "raw_value_size": 2409516, "raw_average_value_size": 3318, "num_data_blocks": 147, "num_entries": 726, "num_filter_entries": 726, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002626, "oldest_key_time": 1760002626, "file_creation_time": 1760002663, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 6799 microseconds, and 4117 cpu microseconds.
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:43.823026) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 2428875 bytes OK
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:43.823114) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:43.823537) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:43.823550) EVENT_LOG_v1 {"time_micros": 1760002663823547, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:43.823561) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2887234, prev total WAL file size 2887234, number of live WAL files 2.
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:43.824616) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(2371KB)], [20(11MB)]
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663824651, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 14754785, "oldest_snapshot_seqno": -1}
Oct  9 09:37:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:37:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:43.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 2696 keys, 13381989 bytes, temperature: kUnknown
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663849642, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 13381989, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13359814, "index_size": 14322, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6789, "raw_key_size": 68495, "raw_average_key_size": 25, "raw_value_size": 13305763, "raw_average_value_size": 4935, "num_data_blocks": 634, "num_entries": 2696, "num_filter_entries": 2696, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002419, "oldest_key_time": 0, "file_creation_time": 1760002663, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:43.849816) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 13381989 bytes
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:43.854016) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 589.2 rd, 534.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 11.8 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(11.6) write-amplify(5.5) OK, records in: 3222, records dropped: 526 output_compression: NoCompression
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:43.854032) EVENT_LOG_v1 {"time_micros": 1760002663854024, "job": 6, "event": "compaction_finished", "compaction_time_micros": 25042, "compaction_time_cpu_micros": 18192, "output_level": 6, "num_output_files": 1, "total_output_size": 13381989, "num_input_records": 3222, "num_output_records": 2696, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663854448, "job": 6, "event": "table_file_deletion", "file_number": 22}
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663857087, "job": 6, "event": "table_file_deletion", "file_number": 20}
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:43.824555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:43.857369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:43.857374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:43.857375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:43.857378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:37:43 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:37:43.857379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
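
The flush and compaction pair above (JOB 5 feeding JOB 6) is internally consistent, and the throughput figures in the JOB 6 summary line can be re-derived from the EVENT_LOG entries: 14,754,785 input bytes over 25,042 microseconds is about 589.2 MB/s read, 13,381,989 output bytes over the same window about 534.4 MB/s write, and the amplification factors follow from the 2,428,875-byte L0 input. A quick check of that arithmetic, with every value copied from the lines above:

    # Figures from the JOB 6 "compaction_started"/"compaction_finished"
    # EVENT_LOG_v1 entries above.
    input_bytes  = 14_754_785      # L0 file #22 (2,428,875 B) + L6 file #20
    output_bytes = 13_381_989      # new L6 table #23
    micros       = 25_042          # compaction_time_micros

    # Bytes per microsecond equals (decimal) megabytes per second.
    print(f"{input_bytes / micros:.1f} rd, {output_bytes / micros:.1f} wr")
    # -> 589.2 rd, 534.4 wr, matching the compaction summary line

    l0 = 2_428_875
    print(round(output_bytes / l0, 1))                   # write-amplify ~5.5
    print(round((input_bytes + output_bytes) / l0, 1))   # read-write-amplify ~11.6
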
Oct  9 09:37:44 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:44 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:44 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:44 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v33: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct  9 09:37:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:37:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:44.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:37:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:44 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b4005e80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:37:44.977Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.002935048s
Oct  9 09:37:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:45 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b800a3f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:45.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:46 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:46 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v34: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct  9 09:37:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:46.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:46 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:47 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77c4002600 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:47.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:48 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b800a3f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:48 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v35: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct  9 09:37:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:48.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-0-ujrhwc[30455]: [WARNING] 281/093748 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:37:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:48 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:49 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:37:49
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'volumes', 'images', 'default.rgw.control', 'default.rgw.meta', '.nfs', 'backups', 'default.rgw.log']
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 1)
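
The pg_autoscaler targets above all follow one formula: pg_target = capacity_ratio x bias x root PG budget. With the stock mon_target_pg_per_osd of 100 and this cluster's 3 OSDs the budget is 300, which reproduces every logged value (the 100-per-OSD figure is an assumption; the log only records the products). A worked check:

    # Assumption: mon_target_pg_per_osd = 100 (the Ceph default) across the
    # 3 OSDs seen in the osdmap lines, i.e. a root PG budget of 300. The
    # ratios and biases are copied from the pg_autoscaler lines above.
    budget = 100 * 3

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),   # -> 0.00215572...
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),   # -> 0.00061047...
        ".nfs":               (6.359070782053786e-08, 1.0),   # -> 1.90772...e-05
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),  # -> 0.00015261...
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * budget)   # matches the logged "pg target"

The "quantized to" column is the pg_num the autoscaler actually picks (tiny targets are clamped up to the pool minimum, hence the 1, 16, and 32 values here), and the audit trail that follows shows how it is applied: the mgr first raises the pool's pg_num, then steps pg_num_actual to carry out the split.
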
Oct  9 09:37:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Oct  9 09:37:49 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:37:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:37:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:37:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:49.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
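
The radosgw pattern repeating through this window, an anonymous "HEAD /" from 192.168.122.100 and 192.168.122.102 with each source on a two-second cycle, always status 200 with near-zero latency, looks like load-balancer health probes rather than client traffic; the haproxy Layer4 check warning at 09:37:48 fits the same ingress stack (an inference, not something the log states). A sketch that tallies those probes from the beast access lines (probe_counts is a hypothetical helper):

    import re
    from collections import Counter

    # Matches the beast access-log lines above, e.g.
    #   beast: 0x7f...: 192.168.122.102 - anonymous [...] "HEAD / HTTP/1.0" 200 ...
    BEAST_RE = re.compile(
        r'beast: \S+ (?P<ip>[\d.]+) - (?P<user>\S+) '
        r'\[(?P<when>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+)'
    )

    def probe_counts(lines):
        """Count anonymous 'HEAD /' requests per source IP."""
        hits = Counter()
        for line in lines:
            m = BEAST_RE.search(line)
            if m and m.group("user") == "anonymous" and m.group("req").startswith("HEAD /"):
                hits[m.group("ip")] += 1
        return dict(hits)
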
Oct  9 09:37:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Oct  9 09:37:50 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Oct  9 09:37:50 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Oct  9 09:37:50 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev edce8db7-27fc-4d89-9900-cf12832dc773 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct  9 09:37:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Oct  9 09:37:50 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:50 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:50 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77c4003140 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:50 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v37: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 B/s wr, 0 op/s
Oct  9 09:37:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Oct  9 09:37:50 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:50 compute-0 systemd-logind[798]: New session 23 of user zuul.
Oct  9 09:37:50 compute-0 systemd[1]: Started Session 23 of User zuul.
Oct  9 09:37:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:50.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:50 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b800a3f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Oct  9 09:37:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Oct  9 09:37:51 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Oct  9 09:37:51 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev 38d69e7a-0cc2-4efd-a67c-ddefa6a30f22 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct  9 09:37:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Oct  9 09:37:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:51 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:51 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:51 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:51 compute-0 python3.9[35305]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  9 09:37:51 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:51 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:51.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Oct  9 09:37:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Oct  9 09:37:52 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Oct  9 09:37:52 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev e0d77969-49a5-4c42-b179-9d6e4fa6d7c4 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct  9 09:37:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Oct  9 09:37:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct  9 09:37:52 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:52 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:52 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:52 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:52 compute-0 python3.9[35479]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:37:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:52 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:52 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v40: 74 pgs: 31 unknown, 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:37:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Oct  9 09:37:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Oct  9 09:37:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:37:52] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Oct  9 09:37:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:37:52] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Oct  9 09:37:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:52.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:52 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77c4003140 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Oct  9 09:37:53 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct  9 09:37:53 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:53 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Oct  9 09:37:53 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 47 pg[4.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=47 pruub=9.439827919s) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active pruub 192.019622803s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:37:53 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Oct  9 09:37:53 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev 7d5c1714-423b-4a55-93c5-a89fd0b748e7 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct  9 09:37:53 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 47 pg[4.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=47 pruub=9.439827919s) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown pruub 192.019622803s@ mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Oct  9 09:37:53 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:53 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct  9 09:37:53 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:53 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:53 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct  9 09:37:53 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:53 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:53 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:53 compute-0 python3.9[35637]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:37:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:53 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b800a3f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:53.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Oct  9 09:37:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Oct  9 09:37:54 compute-0 python3.9[35790]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:37:54 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Oct  9 09:37:54 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev dbac6a0a-43ee-46d3-8d9b-b6e8ada172d6 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct  9 09:37:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Oct  9 09:37:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.1e( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.1a( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.1d( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.1b( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.17( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.16( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.15( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.14( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.13( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.12( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.11( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.f( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.c( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.1c( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.2( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.18( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.19( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.4( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.7( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.8( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.1f( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.d( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.1( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.3( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.5( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.6( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.a( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.b( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.e( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.10( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.9( empty local-lis/les=12/13 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.1a( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.1d( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.1b( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.17( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.16( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.14( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.15( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.13( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.12( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.11( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.f( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.c( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.0( empty local-lis/les=47/48 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.2( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.18( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.4( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.19( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.7( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.8( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.1f( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.d( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.1( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.3( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.5( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.1c( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.6( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.b( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.10( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.a( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.e( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.9( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 48 pg[4.1e( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=12/12 les/c/f=13/13/0 sis=47) [1] r=0 lpr=47 pi=[12,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:54 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:54 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v43: 136 pgs: 93 unknown, 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:37:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Oct  9 09:37:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Oct  9 09:37:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct  9 09:37:54 compute-0 ceph-mgr[4772]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Oct  9 09:37:54 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Oct  9 09:37:54 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Oct  9 09:37:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:54.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:54 compute-0 python3.9[35946]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:37:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:54 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Oct  9 09:37:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct  9 09:37:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Oct  9 09:37:55 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Oct  9 09:37:55 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev 38ae45fd-6e98-48bc-a700-768f78b86a57 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct  9 09:37:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Oct  9 09:37:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:55 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:55 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct  9 09:37:55 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:55 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:55 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct  9 09:37:55 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:55 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 49 pg[6.0( v 41'42 (0'0,41'42] local-lis/les=14/15 n=22 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=49 pruub=9.307446480s) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 lcod 41'41 mlcod 41'41 active pruub 194.040603638s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:37:55 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 49 pg[6.0( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=14/15 n=1 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=49 pruub=9.307446480s) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 lcod 41'41 mlcod 0'0 unknown pruub 194.040603638s@ mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-0-ujrhwc[30455]: [WARNING] 281/093755 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:37:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:55 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77c4003140 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:55 compute-0 python3.9[36096]: ansible-ansible.builtin.service_facts Invoked
Oct  9 09:37:55 compute-0 network[36113]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 09:37:55 compute-0 network[36114]: 'network-scripts' will be removed from distribution in near future.
Oct  9 09:37:55 compute-0 network[36115]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 09:37:55 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.1d deep-scrub starts
Oct  9 09:37:55 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.1d deep-scrub ok
Oct  9 09:37:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:55.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Oct  9 09:37:56 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Oct  9 09:37:56 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Oct  9 09:37:56 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev 8002174f-b01a-4b45-b234-aeb2387c70eb (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct  9 09:37:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Oct  9 09:37:56 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.d( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=14/15 n=1 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.e( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=14/15 n=1 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.2( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=14/15 n=2 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.5( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=14/15 n=2 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.6( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=14/15 n=2 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.a( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=14/15 n=1 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.f( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=14/15 n=1 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.3( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=14/15 n=2 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.1( v 41'42 (0'0,41'42] local-lis/les=14/15 n=2 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.7( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=14/15 n=1 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.b( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=14/15 n=1 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.8( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=14/15 n=1 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.9( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=14/15 n=1 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.c( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=14/15 n=1 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.4( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=14/15 n=2 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.d( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.0( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 lcod 41'41 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.5( v 41'42 (0'0,41'42] local-lis/les=49/50 n=2 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.6( v 41'42 (0'0,41'42] local-lis/les=49/50 n=2 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.a( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.2( v 41'42 (0'0,41'42] local-lis/les=49/50 n=2 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.3( v 41'42 (0'0,41'42] local-lis/les=49/50 n=2 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.1( v 41'42 (0'0,41'42] local-lis/les=49/50 n=2 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.7( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.8( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.c( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 50 pg[6.4( v 41'42 (0'0,41'42] local-lis/les=49/50 n=2 ec=49/14 lis/c=14/14 les/c/f=15/15/0 sis=49) [1] r=0 lpr=49 pi=[14,49)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:56 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:56 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b800a3f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:56 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v46: 182 pgs: 2 peering, 46 unknown, 134 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Oct  9 09:37:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Oct  9 09:37:56 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Oct  9 09:37:56 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:56 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.1b deep-scrub starts
Oct  9 09:37:56 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.1b deep-scrub ok
Oct  9 09:37:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:56.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:56 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:57 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Oct  9 09:37:57 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:57 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:57 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:57 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Oct  9 09:37:57 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Oct  9 09:37:57 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev 23b6a74f-7c9f-44e1-9afa-d9507b2c0d17 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct  9 09:37:57 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Oct  9 09:37:57 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:57 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:57 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:57 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:57 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:57 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:57 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:57 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:57 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Oct  9 09:37:57 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Oct  9 09:37:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:57.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:58 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Oct  9 09:37:58 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:58 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Oct  9 09:37:58 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Oct  9 09:37:58 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev 3d18bb0f-dd5f-4b83-b1bf-e5a9526ac53d (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct  9 09:37:58 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Oct  9 09:37:58 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:58 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77c40045b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:58 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:58 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:58 compute-0 python3.9[36380]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:37:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:58 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:37:58 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v49: 244 pgs: 2 peering, 108 unknown, 134 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Oct  9 09:37:58 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Oct  9 09:37:58 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:58 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Oct  9 09:37:58 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:58 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Oct  9 09:37:58 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Oct  9 09:37:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:58.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:58 compute-0 python3.9[36531]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:37:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:58 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b800a3f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Oct  9 09:37:59 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:59 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:59 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Oct  9 09:37:59 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Oct  9 09:37:59 compute-0 ceph-mgr[4772]: [progress INFO root] update: starting ev 6c535125-6c4f-40eb-be22-bdfd06306a5e (PG autoscaler increasing pool 12 PGs from 1 to 32)
Oct  9 09:37:59 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev edce8db7-27fc-4d89-9900-cf12832dc773 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct  9 09:37:59 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event edce8db7-27fc-4d89-9900-cf12832dc773 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 9 seconds
Oct  9 09:37:59 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev 38d69e7a-0cc2-4efd-a67c-ddefa6a30f22 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct  9 09:37:59 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event 38d69e7a-0cc2-4efd-a67c-ddefa6a30f22 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 8 seconds
Oct  9 09:37:59 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev e0d77969-49a5-4c42-b179-9d6e4fa6d7c4 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct  9 09:37:59 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event e0d77969-49a5-4c42-b179-9d6e4fa6d7c4 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 7 seconds
Oct  9 09:37:59 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev 7d5c1714-423b-4a55-93c5-a89fd0b748e7 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct  9 09:37:59 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event 7d5c1714-423b-4a55-93c5-a89fd0b748e7 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 6 seconds
Oct  9 09:37:59 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev dbac6a0a-43ee-46d3-8d9b-b6e8ada172d6 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct  9 09:37:59 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event dbac6a0a-43ee-46d3-8d9b-b6e8ada172d6 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 5 seconds
Oct  9 09:37:59 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev 38ae45fd-6e98-48bc-a700-768f78b86a57 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct  9 09:37:59 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event 38ae45fd-6e98-48bc-a700-768f78b86a57 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Oct  9 09:37:59 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev 8002174f-b01a-4b45-b234-aeb2387c70eb (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct  9 09:37:59 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event 8002174f-b01a-4b45-b234-aeb2387c70eb (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Oct  9 09:37:59 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev 23b6a74f-7c9f-44e1-9afa-d9507b2c0d17 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct  9 09:37:59 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event 23b6a74f-7c9f-44e1-9afa-d9507b2c0d17 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Oct  9 09:37:59 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev 3d18bb0f-dd5f-4b83-b1bf-e5a9526ac53d (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct  9 09:37:59 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event 3d18bb0f-dd5f-4b83-b1bf-e5a9526ac53d (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Oct  9 09:37:59 compute-0 ceph-mgr[4772]: [progress INFO root] complete: finished ev 6c535125-6c4f-40eb-be22-bdfd06306a5e (PG autoscaler increasing pool 12 PGs from 1 to 32)
Oct  9 09:37:59 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event 6c535125-6c4f-40eb-be22-bdfd06306a5e (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 51 pg[8.0( v 50'68 (0'0,50'68] local-lis/les=29/30 n=6 ec=29/29 lis/c=29/29 les/c/f=30/30/0 sis=51 pruub=11.937980652s) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 50'67 mlcod 50'67 active pruub 200.543487549s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 51 pg[9.0( v 33'9 (0'0,33'9] local-lis/les=32/33 n=6 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=51 pruub=9.412178993s) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 33'8 mlcod 33'8 active pruub 198.017822266s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[11.0( v 40'96 (0'0,40'96] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=53 pruub=13.437383652s) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 40'95 mlcod 40'95 active pruub 202.043197632s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.0( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=51 pruub=9.412178993s) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 33'8 mlcod 0'0 unknown pruub 198.017822266s@ mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563ba829f200) operator()   moving buffer(0x563ba7da6c08 space 0x563ba7af8830 0x0~1000 clean)
Oct  9 09:37:59 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563ba829f200) operator()   moving buffer(0x563ba7af6528 space 0x563ba7c245c0 0x0~1000 clean)
Oct  9 09:37:59 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563ba829f200) operator()   moving buffer(0x563ba7b794c8 space 0x563ba76a1c80 0x0~1000 clean)
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.0( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=29/29 lis/c=29/29 les/c/f=30/30/0 sis=51 pruub=11.937980652s) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 50'67 mlcod 0'0 unknown pruub 200.543487549s@ mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x563ba829f200) operator()   moving buffer(0x563ba7da68e8 space 0x563ba7c764f0 0x0~1000 clean)
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.1( v 33'9 (0'0,33'9] local-lis/les=32/33 n=1 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.2( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=1 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.3( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=1 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.4( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=1 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.5( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=1 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.6( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=1 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.7( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.8( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.9( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.a( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.b( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.c( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.d( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.e( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.f( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.10( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.11( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.12( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.13( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.14( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.15( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.16( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.17( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[11.0( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=53 pruub=13.437383652s) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 40'95 mlcod 0'0 unknown pruub 202.043197632s@ mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.18( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.19( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.1a( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.1b( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.1c( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.1d( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.1e( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[9.1f( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=32/33 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.5( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=1 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.3( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=1 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.4( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=1 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.2( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=1 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.6( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=1 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.1( v 50'68 (0'0,50'68] local-lis/les=29/30 n=1 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.7( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.8( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.9( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.a( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.b( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.c( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.d( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.e( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.f( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.10( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.11( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.12( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.13( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.14( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.15( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.16( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.17( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.18( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.19( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.1a( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.1b( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.1c( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.1d( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.1e( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 53 pg[8.1f( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=29/30 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:59 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:59 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:59 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:59 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:59 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:59 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:37:59 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:59 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Oct  9 09:37:59 compute-0 ceph-mgr[4772]: [progress INFO root] Writing back 25 completed events
Oct  9 09:37:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 09:37:59 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Oct  9 09:37:59 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:37:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:37:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:59.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:38:00 compute-0 python3.9[36685]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:38:00 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Oct  9 09:38:00 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Oct  9 09:38:00 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.11( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.12( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.13( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.14( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.15( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.18( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.19( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.1a( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.1b( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.1c( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.1d( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.1e( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.3( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=1 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.17( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.d( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.f( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.16( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.b( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.8( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=1 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.7( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=1 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.10( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.2( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=1 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.c( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.e( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.a( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.9( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.6( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=1 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.5( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=1 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.4( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=1 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.1f( v 40'96 lc 0'0 (0'0,40'96] local-lis/les=36/37 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.1( v 40'96 (0'0,40'96] local-lis/les=36/37 n=1 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.11( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.13( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.10( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.11( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.10( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.13( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.11( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.12( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.14( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.17( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.16( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.16( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.15( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.17( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.1b( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.1a( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.18( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.19( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.1a( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.1b( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.19( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.18( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.1a( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.18( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.19( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.1f( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.1b( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.1c( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.1e( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.1e( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.1f( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.1d( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.1d( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.1c( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.1e( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.0( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 40'95 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.2( v 33'9 (0'0,33'9] local-lis/les=51/54 n=1 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.0( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=29/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 50'67 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.1( v 33'9 (0'0,33'9] local-lis/les=51/54 n=1 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.3( v 40'96 (0'0,40'96] local-lis/les=53/54 n=1 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.17( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.3( v 50'68 (0'0,50'68] local-lis/les=51/54 n=1 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.14( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.15( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.e( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.f( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.d( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.c( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.d( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.f( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.16( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.15( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.8( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.b( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.14( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.9( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.a( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.b( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.4( v 50'68 (0'0,50'68] local-lis/les=51/54 n=1 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.5( v 33'9 (0'0,33'9] local-lis/les=51/54 n=1 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.8( v 40'96 (0'0,40'96] local-lis/les=53/54 n=1 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.13( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.10( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.12( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.7( v 40'96 (0'0,40'96] local-lis/les=53/54 n=1 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.0( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 33'8 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.d( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.12( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.1( v 50'68 (0'0,50'68] local-lis/les=51/54 n=1 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.c( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.e( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.e( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.a( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.f( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.9( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.9( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.b( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.a( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.8( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.5( v 50'68 (0'0,50'68] local-lis/les=51/54 n=1 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.2( v 40'96 (0'0,40'96] local-lis/les=53/54 n=1 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.6( v 40'96 (0'0,40'96] local-lis/les=53/54 n=1 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.6( v 50'68 (0'0,50'68] local-lis/les=51/54 n=1 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.7( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.4( v 33'9 (0'0,33'9] local-lis/les=51/54 n=1 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.5( v 40'96 (0'0,40'96] local-lis/les=53/54 n=1 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.7( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.4( v 40'96 (0'0,40'96] local-lis/les=53/54 n=1 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.6( v 33'9 (0'0,33'9] local-lis/les=51/54 n=1 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.c( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:00 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.3( v 33'9 (0'0,33'9] local-lis/les=51/54 n=1 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.1f( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.1c( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[9.1d( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=32/32 les/c/f=33/33/0 sis=51) [1] r=0 lpr=51 pi=[32,51)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[11.1( v 40'96 (0'0,40'96] local-lis/les=53/54 n=1 ec=53/36 lis/c=36/36 les/c/f=37/37/0 sis=53) [1] r=0 lpr=53 pi=[36,53)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 54 pg[8.2( v 50'68 (0'0,50'68] local-lis/les=51/54 n=1 ec=51/29 lis/c=29/29 les/c/f=30/30/0 sis=51) [1] r=0 lpr=51 pi=[29,51)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:00 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:00 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v52: 306 pgs: 2 peering, 170 unknown, 134 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:38:00 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Oct  9 09:38:00 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:00 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Oct  9 09:38:00 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Oct  9 09:38:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:38:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:00.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:38:00 compute-0 python3.9[36845]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:38:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:00 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77c40045b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Oct  9 09:38:01 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:38:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Oct  9 09:38:01 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Oct  9 09:38:01 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:01 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:01 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77c40045b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:01 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Oct  9 09:38:01 compute-0 python3.9[36929]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:38:01 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Oct  9 09:38:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:01.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:02 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:02 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Oct  9 09:38:02 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:38:02 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Oct  9 09:38:02 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Oct  9 09:38:02 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v55: 337 pgs: 31 unknown, 32 peering, 274 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Oct  9 09:38:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:38:02] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Oct  9 09:38:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:38:02] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Oct  9 09:38:02 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Oct  9 09:38:02 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Oct  9 09:38:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:02.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:02 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:03 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:38:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:03 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:38:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:03 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:38:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:03 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b800a3f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:03 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct  9 09:38:03 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct  9 09:38:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:03.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:04 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:04 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77c40056b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v56: 337 pgs: 31 unknown, 32 peering, 274 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 985 B/s wr, 2 op/s
Oct  9 09:38:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:38:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
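
This audit entry is the mgr's periodic poll of the OSD blocklist. The same mon_command can be issued programmatically; a sketch using the python3-rados binding, where the conffile path assumes a stock /etc/ceph layout:

    import json
    import rados  # python3-rados binding

    # Issue the same mon_command the mgr dispatches above, via librados.
    # The conffile path is an assumption for a typical deployment.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
    ret, outbuf, errs = cluster.mon_command(cmd, b"")
    print(ret, json.loads(outbuf or b"[]"))
    cluster.shutdown()
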
Oct  9 09:38:04 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.c scrub starts
Oct  9 09:38:04 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.c scrub ok
Oct  9 09:38:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:04.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:04 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:04 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:05 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:05 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.0 deep-scrub starts
Oct  9 09:38:05 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.0 deep-scrub ok
Oct  9 09:38:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:05.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b800a3f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 09:38:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v57: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.6 KiB/s wr, 4 op/s
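
Between pgmap v55 and v57 the 31 unknown and 32 peering PGs all reached active+clean, so the earlier churn was transient peering rather than a stuck state. The one-line state histogram is easy to pull apart; a sketch written against the exact summary format used here:

    import re

    # Extract the PG state histogram from a pgmap summary line.
    line = ("pgmap v57: 337 pgs: 337 active+clean; 457 KiB data, "
            "103 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, "
            "1.6 KiB/s wr, 4 op/s")

    m = re.search(r'pgmap v(\d+): (\d+) pgs: ([^;]+);', line)
    version, total, states = int(m[1]), int(m[2]), {}
    for part in m[3].split(","):
        count, state = part.strip().split(" ", 1)
        states[state] = int(count)

    # The per-state counts should account for every PG.
    assert sum(states.values()) == total
    print(version, states)
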
Oct  9 09:38:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  9 09:38:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  9 09:38:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  9 09:38:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  9 09:38:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Oct  9 09:38:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  9 09:38:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  9 09:38:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Oct  9 09:38:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  9 09:38:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  9 09:38:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  9 09:38:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Oct  9 09:38:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
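
The ten dispatches above are the mgr stepping each pool's pgp_num_actual up to its pg_num (32 for most pools, 2 for cephfs.cephfs.meta and default.rgw.log). Raising pgp_num changes object placement, which is what triggers the osdmap bump and the peering activity that follows. A read-only way to watch the same values converge, assuming the ceph CLI and an admin keyring are available:

    import json
    import subprocess

    # Compare pg_num against pgp_num for each pool; read-only, and assumes
    # the `ceph` CLI can reach the cluster with an admin key.
    pools = json.loads(subprocess.check_output(
        ["ceph", "osd", "pool", "ls", "detail", "--format", "json"]))
    for p in pools:
        print(p["pool_name"],
              "pg_num", p["pg_num"],
              "pgp_num", p["pg_placement_num"])
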
Oct  9 09:38:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:38:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:06.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:38:06 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Oct  9 09:38:06 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Oct  9 09:38:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:06 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77c40056b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Oct  9 09:38:07 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  9 09:38:07 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  9 09:38:07 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  9 09:38:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  9 09:38:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:07 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Oct  9 09:38:07 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[7.13( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[12.6( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[7.8( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[7.9( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[12.e( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[7.6( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[12.b( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[7.2( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[7.e( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[7.b( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[7.4( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[12.c( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[12.8( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[7.3( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[12.a( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[7.f( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[7.10( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[12.1c( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.12( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.898915291s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active pruub 205.623168945s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.1d( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.864549637s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 207.588806152s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.1d( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.864530563s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 207.588806152s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.11( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.898221970s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active pruub 205.622634888s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.11( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.898208618s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.622634888s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.12( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.898900032s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.623168945s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.10( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897874832s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active pruub 205.622436523s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.10( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897863388s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.622436523s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.13( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.898148537s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active pruub 205.622817993s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.13( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.898117065s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.622817993s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.12( v 56'69 (0'0,56'69] local-lis/les=51/54 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.901682854s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=56'69 lcod 50'68 mlcod 50'68 active pruub 205.626571655s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.12( v 56'69 (0'0,56'69] local-lis/les=51/54 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.901652336s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=56'69 lcod 50'68 mlcod 0'0 unknown NOTIFY pruub 205.626571655s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.14( v 56'99 (0'0,56'99] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.898176193s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=56'97 lcod 56'98 mlcod 56'98 active pruub 205.623199463s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.13( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897457123s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active pruub 205.622436523s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.14( v 56'99 (0'0,56'99] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.898148537s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=56'97 lcod 56'98 mlcod 0'0 unknown NOTIFY pruub 205.623199463s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.13( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897415161s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.622436523s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.1b( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.863708496s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 207.588821411s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.1b( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.863698959s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 207.588821411s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.17( v 56'69 (0'0,56'69] local-lis/les=51/54 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.898182869s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=56'69 lcod 50'68 mlcod 50'68 active pruub 205.623367310s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.16( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.898163795s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active pruub 205.623382568s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.17( v 56'69 (0'0,56'69] local-lis/les=51/54 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.898157120s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=56'69 lcod 50'68 mlcod 0'0 unknown NOTIFY pruub 205.623367310s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.16( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.898153305s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.623382568s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.10( v 54'71 (0'0,54'71] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897356033s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=54'71 lcod 54'70 mlcod 54'70 active pruub 205.622711182s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.10( v 54'71 (0'0,54'71] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897270203s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=54'71 lcod 54'70 mlcod 0'0 unknown NOTIFY pruub 205.622711182s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.1c( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.864700317s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 207.590255737s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.1c( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.864685059s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 207.590255737s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.11( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897212029s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active pruub 205.622863770s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.11( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897153854s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.622863770s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.1a( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.862895966s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 207.588790894s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.1a( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.862854958s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 207.588790894s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.16( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897247314s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active pruub 205.623458862s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.16( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897233009s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.623458862s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.17( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897313118s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active pruub 205.623535156s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.17( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897299767s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.623535156s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.1b( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897125244s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active pruub 205.623580933s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.1b( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897113800s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.623580933s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.15( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.863427162s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 207.589920044s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.15( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.863416672s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 207.589920044s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.19( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897693634s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active pruub 205.624237061s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.1b( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.898726463s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active pruub 205.625305176s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.1b( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.898716927s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.625305176s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.14( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.863194466s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 207.589935303s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.14( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.863180161s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 207.589935303s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.18( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897452354s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active pruub 205.624252319s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.1a( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.898318291s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active pruub 205.625030518s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.1a( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.898163795s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.625030518s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.18( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.898238182s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active pruub 205.625076294s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.18( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.898095131s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.625076294s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.18( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897346497s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.624252319s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.13( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.862953186s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 207.590026855s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.13( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.862937927s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 207.590026855s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.19( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897681236s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.624237061s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.1f( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897994995s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active pruub 205.625183105s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.1f( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897959709s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.625183105s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.1c( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.898771286s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active pruub 205.625732422s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.1c( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.898476601s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.625732422s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.19( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.896513939s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active pruub 205.623931885s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.19( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.896502495s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.623931885s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.1e( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.898687363s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active pruub 205.626159668s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.1e( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.898677826s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626159668s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.1d( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.898573875s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active pruub 205.625946045s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[6.d( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.873598099s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active pruub 209.601669312s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.3( v 56'99 (0'0,56'99] local-lis/les=53/54 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.898108482s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=56'97 lcod 56'98 mlcod 56'98 active pruub 205.626296997s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.3( v 56'99 (0'0,56'99] local-lis/les=53/54 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.898087502s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=56'97 lcod 56'98 mlcod 0'0 unknown NOTIFY pruub 205.626296997s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.1d( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.897618294s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.625946045s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.c( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.861649513s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 207.590011597s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.c( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.861636162s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 207.590011597s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.18( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.861602783s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 207.590026855s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.18( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.861591339s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 207.590026855s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.14( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897879601s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active pruub 205.626358032s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.14( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897871971s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626358032s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.15( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897800446s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active pruub 205.626373291s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.15( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897789001s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626373291s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[7.18( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.2( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.861243248s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 207.590011597s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.2( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.861232758s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 207.590011597s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.f( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.897563934s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active pruub 205.626419067s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.f( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.897555351s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626419067s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.3( v 50'68 (0'0,50'68] local-lis/les=51/54 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.898041725s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active pruub 205.626342773s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.f( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897479057s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active pruub 205.626403809s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.f( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897465706s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626403809s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[12.19( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[6.d( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.873517036s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.601669312s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[12.12( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.17( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.898002625s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active pruub 205.626312256s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.17( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.896940231s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626312256s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[12.10( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.3( v 50'68 (0'0,50'68] local-lis/les=51/54 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.897420883s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626342773s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[7.1e( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[7.1b( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.16( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.896073341s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active pruub 205.626434326s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.15( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.896039009s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active pruub 205.626434326s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.16( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.896049500s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626434326s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.15( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.896026611s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626434326s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.19( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.859500885s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 207.590057373s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.19( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.859436989s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 207.590057373s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.d( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.895748138s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active pruub 205.626419067s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.d( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.895704269s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626419067s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.9( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.895706177s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active pruub 205.626480103s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.9( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.895695686s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626480103s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.c( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.895587921s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active pruub 205.626419067s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.8( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.895835876s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active pruub 205.626464844s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.8( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.895595551s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626464844s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[6.5( v 41'42 (0'0,41'42] local-lis/les=49/50 n=2 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.872528076s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active pruub 209.603500366s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[6.5( v 41'42 (0'0,41'42] local-lis/les=49/50 n=2 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.872458458s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.603500366s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.8( v 40'96 (0'0,40'96] local-lis/les=53/54 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.895503998s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active pruub 205.626510620s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.c( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.895499229s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626419067s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.7( v 40'96 (0'0,40'96] local-lis/les=53/54 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.895307541s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active pruub 205.626556396s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.7( v 40'96 (0'0,40'96] local-lis/les=53/54 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.895290375s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626556396s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.8( v 40'96 (0'0,40'96] local-lis/les=53/54 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.895439148s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626510620s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.8( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.858713150s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 207.590133667s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.a( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.895259857s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active pruub 205.626480103s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.8( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.858701706s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 207.590133667s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.a( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.895040512s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626480103s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.5( v 33'9 (0'0,33'9] local-lis/les=51/54 n=1 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.895028114s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active pruub 205.626510620s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.5( v 33'9 (0'0,33'9] local-lis/les=51/54 n=1 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.895017624s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626510620s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.1f( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.858545303s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 207.590133667s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.871965408s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active pruub 209.603561401s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.871955872s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.603561401s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.b( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.894680977s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active pruub 205.626495361s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.b( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.894645691s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626495361s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.1f( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.858533859s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 207.590133667s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.d( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.858015060s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 207.590148926s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.e( v 56'99 (0'0,56'99] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.894324303s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=56'97 lcod 56'98 mlcod 56'98 active pruub 205.626602173s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.12( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.894159317s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active pruub 205.626541138s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.e( v 56'99 (0'0,56'99] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.894295692s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=56'97 lcod 56'98 mlcod 0'0 unknown NOTIFY pruub 205.626602173s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.d( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.857953072s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 207.590148926s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[6.3( v 41'42 (0'0,41'42] local-lis/les=49/50 n=2 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.870911598s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active pruub 209.603576660s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.4( v 50'68 (0'0,50'68] local-lis/les=51/54 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.893794060s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active pruub 205.626495361s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.12( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.894070625s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626541138s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.4( v 50'68 (0'0,50'68] local-lis/les=51/54 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.893720627s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626495361s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[6.3( v 41'42 (0'0,41'42] local-lis/les=49/50 n=2 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.870723724s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.603576660s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.d( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.893544197s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active pruub 205.626571655s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.1( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.857125282s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 207.590164185s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.d( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.893530846s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626571655s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.1( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.857110977s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 207.590164185s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.e( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.893457413s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active pruub 205.626602173s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.e( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.893447876s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626602173s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[6.1( v 41'42 (0'0,41'42] local-lis/les=49/50 n=2 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.870275497s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active pruub 209.603561401s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[6.1( v 41'42 (0'0,41'42] local-lis/les=49/50 n=2 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.870261192s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.603561401s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.f( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.893175125s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active pruub 205.626617432s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[6.7( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.870155334s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active pruub 209.603607178s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[6.7( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.870143890s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.603607178s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.9( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.893085480s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active pruub 205.626647949s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.9( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.893074989s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626647949s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.3( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.856675148s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 207.590225220s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.8( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.893046379s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active pruub 205.626708984s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.8( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.893032074s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626708984s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.5( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.856463432s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 207.590240479s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.5( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.856453896s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 207.590240479s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.6( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.856476784s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 207.590270996s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.a( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.893367767s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active pruub 205.626602173s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.6( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.856460571s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 207.590270996s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.b( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.892647743s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active pruub 205.626678467s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.b( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.892636299s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626678467s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.870057106s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active pruub 209.604171753s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.870045662s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.604171753s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.a( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.892490387s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active pruub 205.626678467s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.9( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.856063843s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 207.590270996s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.9( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.856052399s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 207.590270996s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.a( v 40'96 (0'0,40'96] local-lis/les=53/54 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.892783165s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626602173s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.3( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.856506348s) [2] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 207.590225220s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.a( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.855831146s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 207.590270996s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.a( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.855820656s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 207.590270996s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.6( v 56'69 (0'0,56'69] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.892189980s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=56'69 lcod 50'68 mlcod 50'68 active pruub 205.626739502s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.6( v 56'69 (0'0,56'69] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.892171860s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=56'69 lcod 50'68 mlcod 0'0 unknown NOTIFY pruub 205.626739502s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.5( v 40'96 (0'0,40'96] local-lis/les=53/54 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.892106056s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active pruub 205.626754761s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.5( v 40'96 (0'0,40'96] local-lis/les=53/54 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.892096519s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626754761s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.5( v 50'68 (0'0,50'68] local-lis/les=51/54 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.891910553s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active pruub 205.626708984s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.f( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.893148422s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626617432s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.5( v 50'68 (0'0,50'68] local-lis/les=51/54 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.891875267s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626708984s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.7( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.891962051s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active pruub 205.626739502s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.869503975s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active pruub 209.604400635s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.869488716s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.604400635s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.7( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.891808510s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626739502s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.a( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.892415047s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626678467s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.e( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.855032921s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 207.590255737s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[4.e( empty local-lis/les=47/48 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=10.855016708s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 207.590255737s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.4( v 40'96 (0'0,40'96] local-lis/les=53/54 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.891481400s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active pruub 205.626770020s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.4( v 40'96 (0'0,40'96] local-lis/les=53/54 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.891435623s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626770020s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.6( v 33'9 (0'0,33'9] local-lis/les=51/54 n=1 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.891343117s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active pruub 205.626785278s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.3( v 33'9 (0'0,33'9] local-lis/les=51/54 n=1 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.896484375s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active pruub 205.631942749s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.3( v 33'9 (0'0,33'9] local-lis/les=51/54 n=1 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.896424294s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.631942749s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.1d( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.896512985s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active pruub 205.632247925s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.1d( v 33'9 (0'0,33'9] local-lis/les=51/54 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.896503448s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.632247925s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.2( v 50'68 (0'0,50'68] local-lis/les=51/54 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.896444321s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active pruub 205.632339478s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.2( v 50'68 (0'0,50'68] local-lis/les=51/54 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.896423340s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.632339478s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.1c( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.896118164s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active pruub 205.632141113s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[9.6( v 33'9 (0'0,33'9] local-lis/les=51/54 n=1 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.891292572s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.626785278s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[8.1c( v 50'68 (0'0,50'68] local-lis/les=51/54 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57 pruub=8.896104813s) [2] r=-1 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.632141113s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.1( v 40'96 (0'0,40'96] local-lis/les=53/54 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.896003723s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active pruub 205.632263184s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[11.1( v 40'96 (0'0,40'96] local-lis/les=53/54 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57 pruub=8.895830154s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.632263184s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[3.7( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[5.3( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[5.5( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[5.14( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[3.b( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[3.2( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[3.1( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[5.17( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[3.6( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[3.4( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[3.19( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[5.c( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[3.12( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[5.6( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[5.a( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[3.17( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[5.1e( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[5.1d( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[3.18( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[3.1e( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[5.19( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 57 pg[3.1f( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:07 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77cc003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Oct  9 09:38:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Oct  9 09:38:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:07.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:08 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Oct  9 09:38:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Oct  9 09:38:08 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[7.1b( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[12.1c( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  9 09:38:08 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  9 09:38:08 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[7.1e( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[7.10( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[12.12( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[7.18( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[12.10( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[3.19( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[7.f( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[7.e( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[5.1d( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[3.12( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[12.19( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[5.19( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[5.3( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[5.17( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[12.a( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[5.14( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[7.3( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[3.1f( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[12.8( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[3.7( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[3.b( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[7.9( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[3.1e( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[5.6( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[7.4( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[5.5( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[7.b( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[5.1e( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[12.c( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[7.2( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[5.c( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[3.6( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[3.4( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[3.2( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[3.18( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[3.1( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[7.6( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[12.b( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[12.e( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[12.6( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[5.a( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[7.8( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[7.13( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 58 pg[3.17( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [1] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v60: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 836 B/s wr, 2 op/s
Oct  9 09:38:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Oct  9 09:38:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  9 09:38:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Oct  9 09:38:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  9 09:38:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:08.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:08 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77b800a3f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:09 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Oct  9 09:38:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Oct  9 09:38:09 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  9 09:38:09 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  9 09:38:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Oct  9 09:38:09 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Oct  9 09:38:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 59 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=10.877807617s) [0] r=-1 lpr=59 pi=[49,59)/1 crt=41'42 lcod 0'0 mlcod 0'0 active pruub 209.601684570s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 59 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=10.877737999s) [0] r=-1 lpr=59 pi=[49,59)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.601684570s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 59 pg[6.6( v 41'42 (0'0,41'42] local-lis/les=49/50 n=2 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=10.879296303s) [0] r=-1 lpr=59 pi=[49,59)/1 crt=41'42 lcod 0'0 mlcod 0'0 active pruub 209.603500366s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 59 pg[6.6( v 41'42 (0'0,41'42] local-lis/les=49/50 n=2 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=10.879283905s) [0] r=-1 lpr=59 pi=[49,59)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.603500366s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 59 pg[6.a( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=10.878767014s) [0] r=-1 lpr=59 pi=[49,59)/1 crt=41'42 lcod 0'0 mlcod 0'0 active pruub 209.603561401s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 59 pg[6.a( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=10.878747940s) [0] r=-1 lpr=59 pi=[49,59)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.603561401s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 59 pg[6.2( v 41'42 (0'0,41'42] local-lis/les=49/50 n=2 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=10.878465652s) [0] r=-1 lpr=59 pi=[49,59)/1 crt=41'42 lcod 0'0 mlcod 0'0 active pruub 209.603485107s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 59 pg[6.2( v 41'42 (0'0,41'42] local-lis/les=49/50 n=2 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=10.878447533s) [0] r=-1 lpr=59 pi=[49,59)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.603485107s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 59 pg[10.16( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 59 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 59 pg[10.2( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 59 pg[10.6( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 59 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 59 pg[10.e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 59 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 59 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59) [1] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:09 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  9 09:38:09 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  9 09:38:09 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  9 09:38:09 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  9 09:38:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:09 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:38:09 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Oct  9 09:38:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:09 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77c40056b0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:09 compute-0 ceph-mgr[4772]: [progress INFO root] Completed event df0a4ca8-1317-4096-bc93-2fe8f37d2215 (Global Recovery Event) in 15 seconds
Oct  9 09:38:09 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Oct  9 09:38:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:09.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:09 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Oct  9 09:38:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:10 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77c40056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Oct  9 09:38:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Oct  9 09:38:10 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Oct  9 09:38:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 60 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 60 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 60 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 60 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 60 pg[10.2( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 60 pg[10.2( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 60 pg[10.16( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 60 pg[10.16( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 60 pg[10.e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 60 pg[10.e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 60 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 60 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 60 pg[10.6( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 60 pg[10.6( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 60 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 60 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=-1 lpr=60 pi=[53,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v63: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:38:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Oct  9 09:38:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  9 09:38:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Oct  9 09:38:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  9 09:38:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:10.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:10 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Oct  9 09:38:10 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Oct  9 09:38:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:10 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Oct  9 09:38:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  9 09:38:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  9 09:38:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Oct  9 09:38:11 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Oct  9 09:38:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  9 09:38:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  9 09:38:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  9 09:38:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  9 09:38:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:11 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:38:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:11.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:38:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Oct  9 09:38:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Oct  9 09:38:12 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Oct  9 09:38:12 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 62 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:12 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 62 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:12 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 62 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:12 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 62 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:12 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 62 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:12 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 62 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:12 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 62 pg[10.2( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:12 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 62 pg[10.2( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:12 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 62 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:12 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 62 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:12 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 62 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:12 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 62 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:12 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 62 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:12 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 62 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:12 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 62 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:12 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 62 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:12 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77ac007720 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:38:12] "GET /metrics HTTP/1.1" 200 48359 "" "Prometheus/2.51.0"
Oct  9 09:38:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:38:12] "GET /metrics HTTP/1.1" 200 48359 "" "Prometheus/2.51.0"
Oct  9 09:38:12 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v66: 337 pgs: 5 active+recovery_wait+remapped, 1 active+recovering+remapped, 8 remapped+peering, 1 active+remapped, 9 peering, 313 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 22/226 objects misplaced (9.735%); 813 B/s, 2 keys/s, 24 objects/s recovering
Oct  9 09:38:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:12 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:38:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:12 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:38:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:12.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-0-ujrhwc[30455]: [WARNING] 281/093812 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  9 09:38:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:12 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77c40056b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:13 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Oct  9 09:38:13 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Oct  9 09:38:13 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Oct  9 09:38:13 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 63 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:13 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 63 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:13 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 63 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:13 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 63 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:13 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 63 pg[10.2( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:13 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 63 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:13 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 63 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:13 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 63 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62) [1] r=0 lpr=62 pi=[53,62)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:13 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77cc004570 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:13 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.e scrub starts
Oct  9 09:38:13 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.e scrub ok
Oct  9 09:38:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:13.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:14 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[28950]: 09/10/2025 09:38:14 : epoch 68e78240 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f77cc004570 fd 48 proxy ignored for local
Oct  9 09:38:14 compute-0 kernel: ganesha.nfsd[36969]: segfault at 50 ip 00007f786664832e sp 00007f7835ffa210 error 4 in libntirpc.so.5.8[7f786662d000+2c000] likely on CPU 0 (core 0, socket 0)
Oct  9 09:38:14 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct  9 09:38:14 compute-0 systemd[1]: Created slice Slice /system/systemd-coredump.
Oct  9 09:38:14 compute-0 systemd[1]: Started Process Core Dump (PID 37048/UID 0).
Oct  9 09:38:14 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v68: 337 pgs: 5 active+recovery_wait+remapped, 1 active+recovering+remapped, 8 remapped+peering, 1 active+remapped, 9 peering, 313 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 22/226 objects misplaced (9.735%); 798 B/s, 2 keys/s, 24 objects/s recovering
Oct  9 09:38:14 compute-0 ceph-mgr[4772]: [progress INFO root] Writing back 26 completed events
Oct  9 09:38:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Oct  9 09:38:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:38:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:14.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:38:14 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Oct  9 09:38:14 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Oct  9 09:38:15 compute-0 systemd-coredump[37049]: Process 28954 (ganesha.nfsd) of user 0 dumped core.
                                                   
                                                   Stack trace of thread 59:
                                                   #0  0x00007f786664832e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                   ELF object binary architecture: AMD x86-64
Oct  9 09:38:15 compute-0 systemd[1]: systemd-coredump@0-37048-0.service: Deactivated successfully.
Oct  9 09:38:15 compute-0 systemd[1]: systemd-coredump@0-37048-0.service: Consumed 1.006s CPU time.
Oct  9 09:38:15 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:15 compute-0 podman[37058]: 2025-10-09 09:38:15.256748926 +0000 UTC m=+0.021449573 container died ae795f28e8cc40d40a12c989e9bbeb32107bf485450cd1f6b578cfeea442e1a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  9 09:38:15 compute-0 systemd[1269]: Created slice User Background Tasks Slice.
Oct  9 09:38:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a390089c062eac4a79ed20d731673608a1e61a7af94d0780df1df839318b8ed8-merged.mount: Deactivated successfully.
Oct  9 09:38:15 compute-0 systemd[1269]: Starting Cleanup of User's Temporary Files and Directories...
Oct  9 09:38:15 compute-0 podman[37058]: 2025-10-09 09:38:15.27736145 +0000 UTC m=+0.042062077 container remove ae795f28e8cc40d40a12c989e9bbeb32107bf485450cd1f6b578cfeea442e1a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:38:15 compute-0 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.2.0.compute-0.rlqbpy.service: Main process exited, code=exited, status=139/n/a
Oct  9 09:38:15 compute-0 systemd[1269]: Finished Cleanup of User's Temporary Files and Directories.
Oct  9 09:38:15 compute-0 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.2.0.compute-0.rlqbpy.service: Failed with result 'exit-code'.
Oct  9 09:38:15 compute-0 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.2.0.compute-0.rlqbpy.service: Consumed 1.015s CPU time.
Oct  9 09:38:15 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Oct  9 09:38:15 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Oct  9 09:38:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:38:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:15.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:38:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:16 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v69: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 580 B/s, 3 keys/s, 24 objects/s recovering
Oct  9 09:38:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Oct  9 09:38:16 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  9 09:38:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Oct  9 09:38:16 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  9 09:38:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:16.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:16 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Oct  9 09:38:16 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Oct  9 09:38:17 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Oct  9 09:38:17 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  9 09:38:17 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  9 09:38:17 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  9 09:38:17 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  9 09:38:17 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Oct  9 09:38:17 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Oct  9 09:38:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-0-ujrhwc[30455]: [WARNING] 281/093817 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  9 09:38:17 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Oct  9 09:38:17 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Oct  9 09:38:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:17.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:18 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Oct  9 09:38:18 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Oct  9 09:38:18 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Oct  9 09:38:18 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  9 09:38:18 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  9 09:38:18 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v72: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 38 B/s, 1 keys/s, 8 objects/s recovering
Oct  9 09:38:18 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Oct  9 09:38:18 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  9 09:38:18 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Oct  9 09:38:18 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  9 09:38:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:18.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:18 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.1a deep-scrub starts
Oct  9 09:38:18 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.1a deep-scrub ok
Oct  9 09:38:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Oct  9 09:38:19 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  9 09:38:19 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  9 09:38:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Oct  9 09:38:19 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Oct  9 09:38:19 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  9 09:38:19 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  9 09:38:19 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  9 09:38:19 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  9 09:38:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:38:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:38:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:38:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:38:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:38:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f49c20d23d0>)]
Oct  9 09:38:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Oct  9 09:38:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:38:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f49c215c700>)]
Oct  9 09:38:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Oct  9 09:38:19 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.1b deep-scrub starts
Oct  9 09:38:19 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.1b deep-scrub ok
Oct  9 09:38:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:38:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:19.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:38:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-0-ujrhwc[30455]: [WARNING] 281/093820 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:38:20 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 66 pg[10.5( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=66) [1] r=0 lpr=66 pi=[60,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:20 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 66 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=66) [1] r=0 lpr=66 pi=[60,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:20 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 66 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=66) [1] r=0 lpr=66 pi=[61,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:20 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 66 pg[10.15( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=66) [1] r=0 lpr=66 pi=[60,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:20 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v74: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 38 B/s, 1 keys/s, 8 objects/s recovering
Oct  9 09:38:20 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Oct  9 09:38:20 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  9 09:38:20 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Oct  9 09:38:20 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  9 09:38:20 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Oct  9 09:38:20 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  9 09:38:20 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  9 09:38:20 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Oct  9 09:38:20 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  9 09:38:20 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  9 09:38:20 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 67 pg[10.15( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] r=-1 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 67 pg[10.15( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] r=-1 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:20 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Oct  9 09:38:20 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 67 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=67) [1]/[2] r=-1 lpr=67 pi=[61,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 67 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=67) [1]/[2] r=-1 lpr=67 pi=[61,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:20 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 67 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=67 pruub=8.765403748s) [0] r=-1 lpr=67 pi=[62,67)/1 crt=40'1059 mlcod 0'0 active pruub 218.580963135s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 67 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=67 pruub=8.765381813s) [0] r=-1 lpr=67 pi=[62,67)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 218.580963135s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:20 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 67 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=67 pruub=8.764056206s) [0] r=-1 lpr=67 pi=[62,67)/1 crt=40'1059 mlcod 0'0 active pruub 218.580078125s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 67 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=67 pruub=8.763997078s) [0] r=-1 lpr=67 pi=[62,67)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 218.580078125s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:20 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 67 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] r=-1 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 67 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] r=-1 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:20 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 67 pg[10.5( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] r=-1 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 67 pg[10.5( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] r=-1 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:20 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 67 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=67 pruub=8.764134407s) [0] r=-1 lpr=67 pi=[62,67)/1 crt=40'1059 mlcod 0'0 active pruub 218.580917358s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 67 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=67 pruub=8.764114380s) [0] r=-1 lpr=67 pi=[62,67)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 218.580917358s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:20 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 67 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=67 pruub=8.764392853s) [0] r=-1 lpr=67 pi=[62,67)/1 crt=40'1059 mlcod 0'0 active pruub 218.581283569s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 67 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=67 pruub=8.764375687s) [0] r=-1 lpr=67 pi=[62,67)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 218.581283569s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:20 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 67 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=59/59 les/c/f=60/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:20 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 67 pg[6.6( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=59/59 les/c/f=60/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
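
The burst of osd.1 lines above records placement groups re-peering as the osdmap advances to epoch 67: each PeeringState::start_peering_interval entry gives the old and new up/acting sets, and the role transition tells the story (role 0 -> -1 means osd.1 dropped out of the acting set and the PG goes Stray on it; role -1 -> 0 means osd.1 just became acting primary), matching the state<Start> lines that follow each one. A minimal Python sketch for pulling those fields out of lines with this exact layout (a hypothetical parser, not a Ceph tool):

    import re

    # Field layout as in the osd.1 lines above (Ceph squid-era logging);
    # this regex is a best-effort match for that one message shape.
    INTERVAL_RE = re.compile(
        r"pg\[(?P<pgid>[0-9a-f]+\.[0-9a-f]+)\(.*?"
        r"PeeringState::start_peering_interval "
        r"up \[(?P<up_old>[\d,]*)\] -> \[(?P<up_new>[\d,]*)\], "
        r"acting \[(?P<act_old>[\d,]*)\] -> \[(?P<act_new>[\d,]*)\].*?"
        r"role (?P<role_old>-?\d+) -> (?P<role_new>-?\d+)"
    )

    def parse_interval(line):
        """Return the peering transition described by a log line, or None."""
        m = INTERVAL_RE.search(line)
        if not m:
            return None
        d = m.groupdict()
        # role 0 -> -1: this OSD left the acting set (PG goes Stray here);
        # role -1 -> 0: this OSD just became the acting primary.
        d["goes_stray"] = d["role_old"] != "-1" and d["role_new"] == "-1"
        d["becomes_primary"] = d["role_old"] == "-1" and d["role_new"] == "0"
        return d

    sample = ("pg[10.1d( empty ...)] PeeringState::start_peering_interval "
              "up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, "
              "up_primary 1 -> 1, role 0 -> -1, features ...")
    print(parse_interval(sample))
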
Oct  9 09:38:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:20.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
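
The anonymous HEAD / probes from 192.168.122.100 and 192.168.122.102 recur on a steady two-second cadence per client address for the rest of this section, which is the usual signature of load-balancer health checks against the RGW beast frontend. A quick way to confirm that from the access lines themselves, assuming the beast log layout shown above (hypothetical script, not part of RGW):

    import re
    from collections import Counter

    # Layout of the "beast:" access line shown above; the trailing
    # latency=...s field is in seconds.
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[[^\]]+\] '
        r'"(?P<method>\S+) (?P<path>\S+) [^"]+" (?P<status>\d+) \d+ '
        r'.*latency=(?P<lat>[\d.]+)s'
    )

    def summarize(lines):
        """Tally (client, method, status) and track the worst latency."""
        tally, worst = Counter(), 0.0
        for line in lines:
            m = BEAST_RE.search(line)
            if m:
                tally[(m["ip"], m["method"], m["status"])] += 1
                worst = max(worst, float(m["lat"]))
        return tally, worst

    line = ('beast: 0x7f7346e135d0: 192.168.122.100 - anonymous '
            '[09/Oct/2025:09:38:20.655 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    print(summarize([line]))
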
Oct  9 09:38:20 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Oct  9 09:38:20 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Oct  9 09:38:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Oct  9 09:38:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Oct  9 09:38:21 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Oct  9 09:38:21 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 68 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=0 lpr=68 pi=[62,68)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:21 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 68 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=0 lpr=68 pi=[62,68)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:21 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 68 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=0 lpr=68 pi=[62,68)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:21 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 68 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=0 lpr=68 pi=[62,68)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:21 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 68 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=0 lpr=68 pi=[62,68)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:21 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 68 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=0 lpr=68 pi=[62,68)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:21 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 68 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=0 lpr=68 pi=[62,68)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:21 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 68 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=0 lpr=68 pi=[62,68)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:21 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 68 pg[6.6( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=67/68 n=2 ec=49/14 lis/c=59/59 les/c/f=60/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 crt=41'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:21 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 68 pg[6.e( v 41'42 lc 35'10 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=59/59 les/c/f=60/60/0 sis=67) [1] r=0 lpr=67 pi=[59,67)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:21 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  9 09:38:21 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
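
These paired dispatch/finished entries, and the identical pairs at vals 8 through 12 below, are the mgr walking pgp_num_actual up one step at a time for the two pools whose pg_num was raised. Stepping pgp_num gradually is how the mgr keeps the misplaced-object fraction under its target (target_max_misplaced_ratio, 0.05 by default, consistent with the 3.431% misplaced reported in the pgmap lines). A toy model of that throttling loop (illustrative only; the real logic lives in the mgr, and the pg_num of 16 below is an assumed value):

    def next_pgp_num(pgp_num, pg_num, misplaced_ratio, target=0.05):
        """Advance pgp_num one step toward pg_num only while the
        cluster's misplaced fraction is under the target (a toy model
        of the mgr's pg_num-change throttling, not its actual code)."""
        if pgp_num < pg_num and misplaced_ratio < target:
            return pgp_num + 1
        return pgp_num

    # The step logged above: pgp_num_actual 7 -> 8 while only
    # 7/204 objects (3.431%) were misplaced.
    print(next_pgp_num(7, 16, 7 / 204))   # -> 8
    print(next_pgp_num(8, 16, 0.06))      # -> 8 (over target: hold)
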
Oct  9 09:38:21 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Oct  9 09:38:21 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Oct  9 09:38:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:21.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Oct  9 09:38:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Oct  9 09:38:22 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Oct  9 09:38:22 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 69 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:22 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 69 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:22 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 69 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=67/61 les/c/f=68/62/0 sis=69) [1] r=0 lpr=69 pi=[61,69)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:22 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 69 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=67/61 les/c/f=68/62/0 sis=69) [1] r=0 lpr=69 pi=[61,69)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:22 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 69 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:22 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 69 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:22 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 69 pg[10.5( v 68'1074 (0'0,68'1074] local-lis/les=0/0 n=6 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 luod=0'0 crt=63'1071 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:22 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 69 pg[10.5( v 68'1074 (0'0,68'1074] local-lis/les=0/0 n=6 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 crt=63'1071 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:22 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 69 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] async=[0] r=0 lpr=68 pi=[62,68)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:22 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 69 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] async=[0] r=0 lpr=68 pi=[62,68)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:22 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 69 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] async=[0] r=0 lpr=68 pi=[62,68)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:22 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 69 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] async=[0] r=0 lpr=68 pi=[62,68)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:38:22] "GET /metrics HTTP/1.1" 200 48359 "" "Prometheus/2.51.0"
Oct  9 09:38:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:38:22] "GET /metrics HTTP/1.1" 200 48359 "" "Prometheus/2.51.0"
Oct  9 09:38:22 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v78: 337 pgs: 2 active+recovery_wait+remapped, 4 unknown, 4 remapped+peering, 4 peering, 1 active+recovering, 322 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 7/204 objects misplaced (3.431%); 111 B/s, 2 objects/s recovering
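
The mgr's pgmap lines compress the whole cluster into one state breakdown; anything other than active+clean here (peering, remapped, recovery_wait, unknown) is transient fallout from the pgp_num steps above, and by v82 below the cluster is back to 337 active+clean. A small parser for this summary shape, assuming the exact "count state, ...;" layout of these lines:

    import re

    # "pgmap vN: <total> pgs: <count state>, ...; <usage>..." as above.
    PGMAP_RE = re.compile(r"pgmap v\d+: (\d+) pgs: ([^;]+);")

    def pg_states(line):
        """Return (total_pgs, {state: count}, pgs_not_active_clean)."""
        m = PGMAP_RE.search(line)
        if not m:
            return None
        total = int(m.group(1))
        states = {}
        for part in m.group(2).split(","):
            count, state = part.strip().split(" ", 1)
            states[state] = int(count)
        return total, states, total - states.get("active+clean", 0)

    line = ("pgmap v78: 337 pgs: 2 active+recovery_wait+remapped, "
            "4 unknown, 4 remapped+peering, 4 peering, "
            "1 active+recovering, 322 active+clean; 457 KiB data")
    print(pg_states(line))   # 15 PGs still settling
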
Oct  9 09:38:22 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.lwqgfy(active, since 92s), standbys: compute-2.takdnm, compute-1.etokpp
Oct  9 09:38:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:22.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:22 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Oct  9 09:38:22 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Oct  9 09:38:23 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Oct  9 09:38:23 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Oct  9 09:38:23 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Oct  9 09:38:23 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=4 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70 pruub=14.996118546s) [0] async=[0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 40'1059 active pruub 227.579925537s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:23 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70 pruub=14.996513367s) [0] async=[0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 40'1059 active pruub 227.580352783s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:23 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70 pruub=14.996471405s) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.580352783s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:23 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=4 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70 pruub=14.996006966s) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.579925537s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:23 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70 pruub=14.995941162s) [0] async=[0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 40'1059 active pruub 227.580154419s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:23 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70 pruub=14.995877266s) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.580154419s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:23 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70 pruub=14.995309830s) [0] async=[0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 40'1059 active pruub 227.579925537s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:23 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70 pruub=14.995251656s) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.579925537s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:23 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.5( v 68'1074 (0'0,68'1074] local-lis/les=69/70 n=6 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 crt=68'1074 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:23 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:23 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=67/61 les/c/f=68/62/0 sis=69) [1] r=0 lpr=69 pi=[61,69)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:23 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:23 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct  9 09:38:23 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Oct  9 09:38:23 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Oct  9 09:38:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:23.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:24 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Oct  9 09:38:24 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Oct  9 09:38:24 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Oct  9 09:38:24 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v81: 337 pgs: 2 active+recovery_wait+remapped, 4 unknown, 4 remapped+peering, 4 peering, 1 active+recovering, 322 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 7/204 objects misplaced (3.431%); 112 B/s, 2 objects/s recovering
Oct  9 09:38:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:24.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:24 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.1d deep-scrub starts
Oct  9 09:38:24 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.1d deep-scrub ok
Oct  9 09:38:25 compute-0 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.2.0.compute-0.rlqbpy.service: Scheduled restart job, restart counter is at 1.
Oct  9 09:38:25 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.rlqbpy for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:38:25 compute-0 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.2.0.compute-0.rlqbpy.service: Consumed 1.015s CPU time.
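
systemd is doing the restarting here: the cephadm-generated unit for the ganesha daemon exited, so a restart job was scheduled with the counter at 1, the old container's CPU accounting was closed out, and a fresh podman container is created below. To read that counter programmatically rather than by grepping the journal, it is the unit's NRestarts property; a small sketch:

    import subprocess

    def restart_count(unit):
        """Read systemd's restart counter for a unit; NRestarts is the
        property behind 'restart counter is at N' journal lines."""
        out = subprocess.run(
            ["systemctl", "show", "--property=NRestarts", "--value", unit],
            capture_output=True, text=True, check=True)
        return int(out.stdout.strip())

    # e.g. for the ganesha unit restarted above:
    # restart_count("ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609"
    #               "@nfs.cephfs.2.0.compute-0.rlqbpy.service")
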
Oct  9 09:38:25 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.rlqbpy for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:38:25 compute-0 podman[37141]: 2025-10-09 09:38:25.719822035 +0000 UTC m=+0.032199170 container create 0a09c597d65e097eb06e8f66cc2d0a297a77462b3a08aa9a00c2370bbc7f53b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c15754c0b2c9892376fd65a4076b792b3dd3288ec90c90a60215586bf16362e1/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  9 09:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c15754c0b2c9892376fd65a4076b792b3dd3288ec90c90a60215586bf16362e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c15754c0b2c9892376fd65a4076b792b3dd3288ec90c90a60215586bf16362e1/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c15754c0b2c9892376fd65a4076b792b3dd3288ec90c90a60215586bf16362e1/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.rlqbpy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
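
The four xfs lines are informational, not errors: these overlay mounts lack the xfs bigtime feature, so their inode timestamps top out at 0x7fffffff seconds after the Unix epoch, the familiar 32-bit time_t limit. Decoding the hex value quoted in the message:

    from datetime import datetime, timezone

    # 0x7fffffff is the limit quoted in the kernel message above.
    limit = datetime.fromtimestamp(0x7fffffff, timezone.utc)
    print(limit)   # 2038-01-19 03:14:07+00:00
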
Oct  9 09:38:25 compute-0 podman[37141]: 2025-10-09 09:38:25.762750985 +0000 UTC m=+0.075128140 container init 0a09c597d65e097eb06e8f66cc2d0a297a77462b3a08aa9a00c2370bbc7f53b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:38:25 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Oct  9 09:38:25 compute-0 podman[37141]: 2025-10-09 09:38:25.769786313 +0000 UTC m=+0.082163447 container start 0a09c597d65e097eb06e8f66cc2d0a297a77462b3a08aa9a00c2370bbc7f53b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:38:25 compute-0 bash[37141]: 0a09c597d65e097eb06e8f66cc2d0a297a77462b3a08aa9a00c2370bbc7f53b5
Oct  9 09:38:25 compute-0 podman[37141]: 2025-10-09 09:38:25.706117379 +0000 UTC m=+0.018494534 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:38:25 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.rlqbpy for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:38:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:25 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  9 09:38:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:25 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  9 09:38:25 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Oct  9 09:38:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:25 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  9 09:38:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:25 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  9 09:38:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:25 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  9 09:38:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:25 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  9 09:38:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:25 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  9 09:38:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:25 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:38:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:25.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:26 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v82: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 147 B/s, 9 objects/s recovering
Oct  9 09:38:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Oct  9 09:38:26 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  9 09:38:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Oct  9 09:38:26 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  9 09:38:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Oct  9 09:38:26 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  9 09:38:26 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  9 09:38:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Oct  9 09:38:26 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Oct  9 09:38:26 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  9 09:38:26 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  9 09:38:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:26.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:26 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Oct  9 09:38:26 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Oct  9 09:38:27 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Oct  9 09:38:27 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Oct  9 09:38:27 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Oct  9 09:38:27 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  9 09:38:27 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  9 09:38:27 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Oct  9 09:38:27 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Oct  9 09:38:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:27.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:28 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v85: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 148 B/s, 9 objects/s recovering
Oct  9 09:38:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Oct  9 09:38:28 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  9 09:38:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Oct  9 09:38:28 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  9 09:38:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Oct  9 09:38:28 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  9 09:38:28 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  9 09:38:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Oct  9 09:38:28 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[6.8( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=74 pruub=15.746602058s) [0] r=-1 lpr=74 pi=[49,74)/1 crt=41'42 lcod 0'0 mlcod 0'0 active pruub 233.604583740s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:28 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[6.8( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=74 pruub=15.746577263s) [0] r=-1 lpr=74 pi=[49,74)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.604583740s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:28 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Oct  9 09:38:28 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:28 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:28 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  9 09:38:28 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  9 09:38:28 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  9 09:38:28 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  9 09:38:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:38:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:28.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:38:28 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Oct  9 09:38:28 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Oct  9 09:38:29 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Oct  9 09:38:29 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Oct  9 09:38:29 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Oct  9 09:38:29 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:29 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:29 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:29 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:29 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Oct  9 09:38:29 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Oct  9 09:38:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:38:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:29.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:38:30 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v88: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:38:30 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Oct  9 09:38:30 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  9 09:38:30 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Oct  9 09:38:30 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  9 09:38:30 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Oct  9 09:38:30 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  9 09:38:30 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  9 09:38:30 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Oct  9 09:38:30 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Oct  9 09:38:30 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  9 09:38:30 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  9 09:38:30 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  9 09:38:30 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  9 09:38:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:30.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:30 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.d deep-scrub starts
Oct  9 09:38:30 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.d deep-scrub ok
Oct  9 09:38:30 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Oct  9 09:38:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Oct  9 09:38:31 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Oct  9 09:38:31 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:31 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:31 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:31 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:31 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=76/77 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:31 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.e scrub starts
Oct  9 09:38:31 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.e scrub ok
Oct  9 09:38:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:31 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:38:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:31 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
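
Taken together with the "NFS Server Now IN GRACE, duration 90" line at 09:38:25 above, this pair shows ganesha checking whether its reclaim grace period can be lifted early: the backend reload found zero clients with state to reclaim (clid count(0)), so there is no need to sit out the full 90 seconds. The arithmetic on the two timestamps, assuming grace ends at this check rather than at the deadline:

    from datetime import datetime, timedelta

    start = datetime(2025, 10, 9, 9, 38, 25)   # grace entered, duration 90 s
    check = datetime(2025, 10, 9, 9, 38, 31)   # zero reclaimable clients
    deadline = start + timedelta(seconds=90)
    print(deadline - check)   # 0:01:24 that an early lift would save
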
Oct  9 09:38:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:31.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:32 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Oct  9 09:38:32 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Oct  9 09:38:32 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Oct  9 09:38:32 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 78 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:32 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 78 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:38:32] "GET /metrics HTTP/1.1" 200 48359 "" "Prometheus/2.51.0"
Oct  9 09:38:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:38:32] "GET /metrics HTTP/1.1" 200 48359 "" "Prometheus/2.51.0"
Oct  9 09:38:32 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v92: 337 pgs: 2 active+remapped, 335 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.0 KiB/s wr, 2 op/s; 195 B/s, 7 objects/s recovering
Oct  9 09:38:32 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Oct  9 09:38:32 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  9 09:38:32 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Oct  9 09:38:32 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  9 09:38:32 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  9 09:38:32 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  9 09:38:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:32.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:32 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Oct  9 09:38:32 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Oct  9 09:38:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Oct  9 09:38:33 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  9 09:38:33 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  9 09:38:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Oct  9 09:38:33 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Oct  9 09:38:33 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  9 09:38:33 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  9 09:38:33 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.b scrub starts
Oct  9 09:38:33 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.b scrub ok
Oct  9 09:38:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:33.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Oct  9 09:38:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Oct  9 09:38:34 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Oct  9 09:38:34 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v95: 337 pgs: 2 active+remapped, 335 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.0 KiB/s wr, 2 op/s; 195 B/s, 7 objects/s recovering
Oct  9 09:38:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Oct  9 09:38:34 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  9 09:38:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Oct  9 09:38:34 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  9 09:38:34 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  9 09:38:34 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  9 09:38:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=infra.usagestats t=2025-10-09T09:38:34.393816063Z level=info msg="Usage stats are ready to report"
Oct  9 09:38:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:38:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:38:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:34.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:34 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Oct  9 09:38:34 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Oct  9 09:38:35 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Oct  9 09:38:35 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  9 09:38:35 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  9 09:38:35 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Oct  9 09:38:35 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Oct  9 09:38:35 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:35 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 79 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79 pruub=9.985282898s) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 active pruub 234.581420898s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:35 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79 pruub=9.985259056s) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.581420898s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:35 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 79 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79 pruub=9.984782219s) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 active pruub 234.581420898s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:35 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79 pruub=9.984754562s) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.581420898s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:35 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  9 09:38:35 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  9 09:38:35 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct  9 09:38:35 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct  9 09:38:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:35.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Oct  9 09:38:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Oct  9 09:38:36 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Oct  9 09:38:36 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:36 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:36 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:36 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:36 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[6.b( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=81/82 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:36 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v98: 337 pgs: 4 unknown, 1 peering, 332 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:38:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.002000019s ======
Oct  9 09:38:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:36.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000019s
Oct  9 09:38:36 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Oct  9 09:38:36 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Oct  9 09:38:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Oct  9 09:38:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Oct  9 09:38:37 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Oct  9 09:38:37 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:37 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:37 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.13 deep-scrub starts
Oct  9 09:38:37 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.13 deep-scrub ok
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  9 09:38:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:37 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 09:38:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:37.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Oct  9 09:38:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Oct  9 09:38:38 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Oct  9 09:38:38 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84 pruub=14.986342430s) [0] async=[0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 40'1059 active pruub 242.587265015s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:38 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84 pruub=14.986305237s) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 242.587265015s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:38 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84 pruub=14.985722542s) [0] async=[0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 40'1059 active pruub 242.587631226s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:38 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84 pruub=14.985552788s) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 242.587631226s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:38 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:38 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v101: 337 pgs: 4 unknown, 1 peering, 332 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:38:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:38.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:38 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Oct  9 09:38:38 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Oct  9 09:38:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:38 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:39 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Oct  9 09:38:39 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Oct  9 09:38:39 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Oct  9 09:38:39 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:39 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:39 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Oct  9 09:38:39 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Oct  9 09:38:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:38:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:39.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:38:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-0-ujrhwc[30455]: [WARNING] 281/093840 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  9 09:38:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:40 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e4001930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:40 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v103: 337 pgs: 4 unknown, 1 peering, 332 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:38:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:40.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:40 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.2 deep-scrub starts
Oct  9 09:38:40 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.2 deep-scrub ok
Oct  9 09:38:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:40 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e4001930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:41 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d40026c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:41 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.c scrub starts
Oct  9 09:38:41 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.c scrub ok
Oct  9 09:38:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:38:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:41.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:38:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:42 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e4001930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:38:42] "GET /metrics HTTP/1.1" 200 48365 "" "Prometheus/2.51.0"
Oct  9 09:38:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:38:42] "GET /metrics HTTP/1.1" 200 48365 "" "Prometheus/2.51.0"
Oct  9 09:38:42 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v104: 337 pgs: 337 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 36 op/s; 36 B/s, 4 objects/s recovering
Oct  9 09:38:42 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Oct  9 09:38:42 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  9 09:38:42 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Oct  9 09:38:42 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  9 09:38:42 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Oct  9 09:38:42 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  9 09:38:42 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  9 09:38:42 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Oct  9 09:38:42 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Oct  9 09:38:42 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  9 09:38:42 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  9 09:38:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:42.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:42 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.c scrub starts
Oct  9 09:38:42 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.c scrub ok
Oct  9 09:38:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:42 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e00025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:43 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  9 09:38:43 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  9 09:38:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:43 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e00025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:43 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Oct  9 09:38:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:43.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:43 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Oct  9 09:38:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:44 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d40035c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:44 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v106: 337 pgs: 337 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 35 op/s; 35 B/s, 4 objects/s recovering
Oct  9 09:38:44 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Oct  9 09:38:44 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  9 09:38:44 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Oct  9 09:38:44 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  9 09:38:44 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Oct  9 09:38:44 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  9 09:38:44 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  9 09:38:44 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Oct  9 09:38:44 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Oct  9 09:38:44 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  9 09:38:44 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  9 09:38:44 compute-0 podman[37434]: 2025-10-09 09:38:44.533572679 +0000 UTC m=+0.039477099 container exec fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:38:44 compute-0 podman[37434]: 2025-10-09 09:38:44.61544283 +0000 UTC m=+0.121347250 container exec_died fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  9 09:38:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:44.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:44 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Oct  9 09:38:44 compute-0 podman[37527]: 2025-10-09 09:38:44.92007735 +0000 UTC m=+0.035243698 container exec 10161c66b361b66edfdbf4951997fb2366322c945e67f044787f85dddc54c994 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:38:44 compute-0 podman[37527]: 2025-10-09 09:38:44.926364722 +0000 UTC m=+0.041531070 container exec_died 10161c66b361b66edfdbf4951997fb2366322c945e67f044787f85dddc54c994 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:38:44 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Oct  9 09:38:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:44 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e4008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:45 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87 pruub=10.061434746s) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 active pruub 244.586517334s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:45 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87 pruub=10.061408043s) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 244.586517334s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:45 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87 pruub=10.060705185s) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 active pruub 244.586517334s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:45 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87 pruub=10.060690880s) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 244.586517334s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:45 compute-0 podman[37613]: 2025-10-09 09:38:45.174270812 +0000 UTC m=+0.031667697 container exec 5c740331e43a547cef58f363bed860d932ba62ab932b4c8a13e2e8dac6839868 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:38:45 compute-0 podman[37613]: 2025-10-09 09:38:45.191631349 +0000 UTC m=+0.049028243 container exec_died 5c740331e43a547cef58f363bed860d932ba62ab932b4c8a13e2e8dac6839868 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:38:45 compute-0 podman[37671]: 2025-10-09 09:38:45.335394823 +0000 UTC m=+0.038380451 container exec d505ba96f4f8073a145fdc67466363156d038071ebcd8a8aeed53305dbe3584a (image=quay.io/ceph/grafana:10.4.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:38:45 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Oct  9 09:38:45 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Oct  9 09:38:45 compute-0 podman[37671]: 2025-10-09 09:38:45.454453398 +0000 UTC m=+0.157439025 container exec_died d505ba96f4f8073a145fdc67466363156d038071ebcd8a8aeed53305dbe3584a (image=quay.io/ceph/grafana:10.4.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 09:38:45 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Oct  9 09:38:45 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:45 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:45 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:45 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:45 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  9 09:38:45 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  9 09:38:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:45 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e00025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:45 compute-0 podman[37731]: 2025-10-09 09:38:45.594287496 +0000 UTC m=+0.035644223 container exec 0c3906f36b8c5387e26601a1089154bdda03c8f87fbea5119420184790883682 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-0-kmcywb)
Oct  9 09:38:45 compute-0 podman[37731]: 2025-10-09 09:38:45.603349849 +0000 UTC m=+0.044706567 container exec_died 0c3906f36b8c5387e26601a1089154bdda03c8f87fbea5119420184790883682 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-0-kmcywb)
Oct  9 09:38:45 compute-0 podman[37784]: 2025-10-09 09:38:45.742507011 +0000 UTC m=+0.034589766 container exec 45254cf9a2cd91037496049d12c8fdc604c0d669b06c7d761c3228749e14c043 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, vcs-type=git, architecture=x86_64, io.openshift.tags=Ceph keepalived, version=2.2.4, com.redhat.component=keepalived-container, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  9 09:38:45 compute-0 podman[37784]: 2025-10-09 09:38:45.75329622 +0000 UTC m=+0.045378974 container exec_died 45254cf9a2cd91037496049d12c8fdc604c0d669b06c7d761c3228749e14c043 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, release=1793, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-type=git, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, architecture=x86_64)
Oct  9 09:38:45 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Oct  9 09:38:45 compute-0 podman[37838]: 2025-10-09 09:38:45.893117954 +0000 UTC m=+0.033659972 container exec ad7aeb5739d77e7c0db5bedadf9f04170fb86eb3e4620e2c374ce0ab10bde8f2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:38:45 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Oct  9 09:38:45 compute-0 podman[37838]: 2025-10-09 09:38:45.913567332 +0000 UTC m=+0.054109351 container exec_died ad7aeb5739d77e7c0db5bedadf9f04170fb86eb3e4620e2c374ce0ab10bde8f2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:38:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:45.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:46 compute-0 podman[37884]: 2025-10-09 09:38:46.022308257 +0000 UTC m=+0.035532201 container exec 0a09c597d65e097eb06e8f66cc2d0a297a77462b3a08aa9a00c2370bbc7f53b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  9 09:38:46 compute-0 podman[37884]: 2025-10-09 09:38:46.032336281 +0000 UTC m=+0.045560205 container exec_died 0a09c597d65e097eb06e8f66cc2d0a297a77462b3a08aa9a00c2370bbc7f53b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:38:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:46 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e00025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:38:46 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:38:46 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:46 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v109: 337 pgs: 4 remapped+peering, 333 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 33 op/s; 36 B/s, 4 objects/s recovering
Oct  9 09:38:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Oct  9 09:38:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Oct  9 09:38:46 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Oct  9 09:38:46 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:46 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:46 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:46 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:38:46 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:38:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:38:46 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:38:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:38:46 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:38:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:46.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:46 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:38:46 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:38:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:38:46 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:38:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:38:46 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:38:46 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.b scrub starts
Oct  9 09:38:46 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.b scrub ok
Oct  9 09:38:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:46 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4003ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:47 compute-0 podman[38100]: 2025-10-09 09:38:47.054154524 +0000 UTC m=+0.026029418 container create 157ca18f6707007d4bb3405ab2c584039fc92a47640a620562c6cf6436931b5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_borg, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  9 09:38:47 compute-0 systemd[1]: Started libpod-conmon-157ca18f6707007d4bb3405ab2c584039fc92a47640a620562c6cf6436931b5e.scope.
Oct  9 09:38:47 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:38:47 compute-0 podman[38100]: 2025-10-09 09:38:47.104931182 +0000 UTC m=+0.076806097 container init 157ca18f6707007d4bb3405ab2c584039fc92a47640a620562c6cf6436931b5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:38:47 compute-0 podman[38100]: 2025-10-09 09:38:47.110565443 +0000 UTC m=+0.082440337 container start 157ca18f6707007d4bb3405ab2c584039fc92a47640a620562c6cf6436931b5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  9 09:38:47 compute-0 podman[38100]: 2025-10-09 09:38:47.111583462 +0000 UTC m=+0.083458356 container attach 157ca18f6707007d4bb3405ab2c584039fc92a47640a620562c6cf6436931b5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:38:47 compute-0 infallible_borg[38113]: 167 167
Oct  9 09:38:47 compute-0 systemd[1]: libpod-157ca18f6707007d4bb3405ab2c584039fc92a47640a620562c6cf6436931b5e.scope: Deactivated successfully.
Oct  9 09:38:47 compute-0 podman[38100]: 2025-10-09 09:38:47.113846518 +0000 UTC m=+0.085721412 container died 157ca18f6707007d4bb3405ab2c584039fc92a47640a620562c6cf6436931b5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  9 09:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf1d2682eee7f287527892bd06931ca9920035bd861bbd04a1e4a865ef51ca94-merged.mount: Deactivated successfully.
Oct  9 09:38:47 compute-0 podman[38100]: 2025-10-09 09:38:47.134819683 +0000 UTC m=+0.106694578 container remove 157ca18f6707007d4bb3405ab2c584039fc92a47640a620562c6cf6436931b5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_borg, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:38:47 compute-0 podman[38100]: 2025-10-09 09:38:47.044105692 +0000 UTC m=+0.015980605 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:38:47 compute-0 systemd[1]: libpod-conmon-157ca18f6707007d4bb3405ab2c584039fc92a47640a620562c6cf6436931b5e.scope: Deactivated successfully.
Oct  9 09:38:47 compute-0 podman[38134]: 2025-10-09 09:38:47.250764789 +0000 UTC m=+0.027946643 container create b3abe7df81dbb6311656d893aaf8fd6c93bb2b0bb5c5dc31a82867f71479662f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:38:47 compute-0 systemd[1]: Started libpod-conmon-b3abe7df81dbb6311656d893aaf8fd6c93bb2b0bb5c5dc31a82867f71479662f.scope.
Oct  9 09:38:47 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db62eda4015220dd139a866df49df155bbc69b9028a00c92f062b9b4a3847601/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db62eda4015220dd139a866df49df155bbc69b9028a00c92f062b9b4a3847601/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db62eda4015220dd139a866df49df155bbc69b9028a00c92f062b9b4a3847601/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db62eda4015220dd139a866df49df155bbc69b9028a00c92f062b9b4a3847601/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db62eda4015220dd139a866df49df155bbc69b9028a00c92f062b9b4a3847601/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
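The kernel's "supports timestamps until 2038" notices above fire on each bind-mount of an xfs filesystem created without the bigtime feature; harmless for now, but easy to audit. A quick check, assuming xfsprogs is installed on the host (the mount point below is an assumption taken from the overlay paths in the log):

    # Quick check (assumes xfsprogs): does the filesystem backing
    # /var/lib/containers carry the xfs "bigtime" feature?
    import subprocess

    info = subprocess.check_output(["xfs_info", "/var/lib/containers"],
                                   text=True)
    print("bigtime=1" in info)   # False on 2038-limited filesystems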
Oct  9 09:38:47 compute-0 podman[38134]: 2025-10-09 09:38:47.309789635 +0000 UTC m=+0.086971509 container init b3abe7df81dbb6311656d893aaf8fd6c93bb2b0bb5c5dc31a82867f71479662f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_johnson, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:38:47 compute-0 podman[38134]: 2025-10-09 09:38:47.314781284 +0000 UTC m=+0.091963137 container start b3abe7df81dbb6311656d893aaf8fd6c93bb2b0bb5c5dc31a82867f71479662f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  9 09:38:47 compute-0 podman[38134]: 2025-10-09 09:38:47.315942874 +0000 UTC m=+0.093124728 container attach b3abe7df81dbb6311656d893aaf8fd6c93bb2b0bb5c5dc31a82867f71479662f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_johnson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 09:38:47 compute-0 podman[38134]: 2025-10-09 09:38:47.240447651 +0000 UTC m=+0.017629524 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:38:47 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Oct  9 09:38:47 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Oct  9 09:38:47 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Oct  9 09:38:47 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90 pruub=14.990008354s) [0] async=[0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 40'1059 active pruub 251.979660034s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:47 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90 pruub=14.989096642s) [0] async=[0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 40'1059 active pruub 251.979141235s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:47 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90 pruub=14.988950729s) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 251.979141235s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:47 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90 pruub=14.989036560s) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 251.979660034s@ mbc={}] state<Start>: transitioning to Stray
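The PeeringState::start_peering_interval entries above record pgs 10.d and 10.1d moving their acting set from osd.1 to osd.0 (acting [1] -> [0], role 0 -> -1), after which osd.1 holds each pg only as a Stray replica. A minimal sketch, assuming the line layout shown above, for pulling the up/acting transitions out of such lines:

    # Minimal sketch (assumes the line layout above): extract the pgid
    # plus up/acting set transitions from start_peering_interval logs.
    import re

    PEERING_RE = re.compile(
        r"pg\[(?P<pgid>[0-9a-f]+\.[0-9a-f]+)\(.*?start_peering_interval "
        r"up \[(?P<up_old>[0-9,]*)\] -> \[(?P<up_new>[0-9,]*)\], "
        r"acting \[(?P<acting_old>[0-9,]*)\] -> \[(?P<acting_new>[0-9,]*)\]")

    def peering_transition(line):
        m = PEERING_RE.search(line)
        return m.groupdict() if m else None

    # For the 10.d line above this yields
    # {'pgid': '10.d', 'up_old': '0', 'up_new': '0',
    #  'acting_old': '1', 'acting_new': '0'}.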
Oct  9 09:38:47 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:38:47 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:47 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:47 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:38:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:47 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e4008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:47 compute-0 loving_johnson[38147]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:38:47 compute-0 loving_johnson[38147]: --> All data devices are unavailable
Oct  9 09:38:47 compute-0 systemd[1]: libpod-b3abe7df81dbb6311656d893aaf8fd6c93bb2b0bb5c5dc31a82867f71479662f.scope: Deactivated successfully.
Oct  9 09:38:47 compute-0 podman[38163]: 2025-10-09 09:38:47.60428949 +0000 UTC m=+0.017698745 container died b3abe7df81dbb6311656d893aaf8fd6c93bb2b0bb5c5dc31a82867f71479662f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_johnson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  9 09:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-db62eda4015220dd139a866df49df155bbc69b9028a00c92f062b9b4a3847601-merged.mount: Deactivated successfully.
Oct  9 09:38:47 compute-0 podman[38163]: 2025-10-09 09:38:47.623216668 +0000 UTC m=+0.036625913 container remove b3abe7df81dbb6311656d893aaf8fd6c93bb2b0bb5c5dc31a82867f71479662f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_johnson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:38:47 compute-0 systemd[1]: libpod-conmon-b3abe7df81dbb6311656d893aaf8fd6c93bb2b0bb5c5dc31a82867f71479662f.scope: Deactivated successfully.
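The repeating create/start/attach/died/remove bursts above are cephadm launching short-lived ceph containers to run ceph-volume probes (the loving_johnson run reported "All data devices are unavailable" because the only candidate device is already an LVM-backed OSD). A rough sketch of such a one-shot run from Python; the --entrypoint path and mounts are assumptions, not read from this log:

    # Rough sketch of a one-shot cephadm-style probe container; the
    # --entrypoint path and mounts are assumptions, not from this log.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    def ceph_volume(*args):
        # --rm produces exactly the podman event train seen above:
        # create -> start -> attach -> died -> remove.
        cmd = ["podman", "run", "--rm", "--privileged", "--net=host",
               "-v", "/dev:/dev",
               "--entrypoint", "/usr/sbin/ceph-volume", IMAGE, *args]
        return subprocess.run(cmd, check=True, capture_output=True,
                              text=True).stdout

    print(ceph_volume("inventory", "--format", "json"))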
Oct  9 09:38:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:47.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:47 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Oct  9 09:38:47 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Oct  9 09:38:48 compute-0 podman[38258]: 2025-10-09 09:38:48.053759113 +0000 UTC m=+0.027252564 container create 4be6ec481d82088ebb1efb5c71e297a195fdcbd2b7db420f64297ef389db0708 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ramanujan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:38:48 compute-0 systemd[1]: Started libpod-conmon-4be6ec481d82088ebb1efb5c71e297a195fdcbd2b7db420f64297ef389db0708.scope.
Oct  9 09:38:48 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:38:48 compute-0 podman[38258]: 2025-10-09 09:38:48.112954902 +0000 UTC m=+0.086448374 container init 4be6ec481d82088ebb1efb5c71e297a195fdcbd2b7db420f64297ef389db0708 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:38:48 compute-0 podman[38258]: 2025-10-09 09:38:48.117873774 +0000 UTC m=+0.091367235 container start 4be6ec481d82088ebb1efb5c71e297a195fdcbd2b7db420f64297ef389db0708 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:38:48 compute-0 podman[38258]: 2025-10-09 09:38:48.118968739 +0000 UTC m=+0.092462190 container attach 4be6ec481d82088ebb1efb5c71e297a195fdcbd2b7db420f64297ef389db0708 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ramanujan, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  9 09:38:48 compute-0 mystifying_ramanujan[38271]: 167 167
Oct  9 09:38:48 compute-0 systemd[1]: libpod-4be6ec481d82088ebb1efb5c71e297a195fdcbd2b7db420f64297ef389db0708.scope: Deactivated successfully.
Oct  9 09:38:48 compute-0 conmon[38271]: conmon 4be6ec481d82088ebb1e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4be6ec481d82088ebb1efb5c71e297a195fdcbd2b7db420f64297ef389db0708.scope/container/memory.events
Oct  9 09:38:48 compute-0 podman[38258]: 2025-10-09 09:38:48.123114363 +0000 UTC m=+0.096607814 container died 4be6ec481d82088ebb1efb5c71e297a195fdcbd2b7db420f64297ef389db0708 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ramanujan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:38:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d0215906ed0ae66b11b0229f808db2a7e3877eb8d19f5207bcece58de6c2137-merged.mount: Deactivated successfully.
Oct  9 09:38:48 compute-0 podman[38258]: 2025-10-09 09:38:48.1390699 +0000 UTC m=+0.112563351 container remove 4be6ec481d82088ebb1efb5c71e297a195fdcbd2b7db420f64297ef389db0708 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_ramanujan, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:38:48 compute-0 podman[38258]: 2025-10-09 09:38:48.042858305 +0000 UTC m=+0.016351775 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:38:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:48 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e00025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:48 compute-0 systemd[1]: libpod-conmon-4be6ec481d82088ebb1efb5c71e297a195fdcbd2b7db420f64297ef389db0708.scope: Deactivated successfully.
Oct  9 09:38:48 compute-0 podman[38293]: 2025-10-09 09:38:48.261668868 +0000 UTC m=+0.028368157 container create b638b63b2865d431be5dfc47bfc19d8659ed3fbd4fa3ec4714fcbab730a928a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_euclid, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:38:48 compute-0 systemd[1]: Started libpod-conmon-b638b63b2865d431be5dfc47bfc19d8659ed3fbd4fa3ec4714fcbab730a928a7.scope.
Oct  9 09:38:48 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v112: 337 pgs: 4 remapped+peering, 333 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:38:48 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd83f34b636a422a008803de1d6bbafa7b6fd2daf7e96e328ca83bd6add42bdc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd83f34b636a422a008803de1d6bbafa7b6fd2daf7e96e328ca83bd6add42bdc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd83f34b636a422a008803de1d6bbafa7b6fd2daf7e96e328ca83bd6add42bdc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd83f34b636a422a008803de1d6bbafa7b6fd2daf7e96e328ca83bd6add42bdc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:38:48 compute-0 podman[38293]: 2025-10-09 09:38:48.324336513 +0000 UTC m=+0.091035830 container init b638b63b2865d431be5dfc47bfc19d8659ed3fbd4fa3ec4714fcbab730a928a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_euclid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  9 09:38:48 compute-0 podman[38293]: 2025-10-09 09:38:48.330857084 +0000 UTC m=+0.097556382 container start b638b63b2865d431be5dfc47bfc19d8659ed3fbd4fa3ec4714fcbab730a928a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 09:38:48 compute-0 podman[38293]: 2025-10-09 09:38:48.336289354 +0000 UTC m=+0.102988652 container attach b638b63b2865d431be5dfc47bfc19d8659ed3fbd4fa3ec4714fcbab730a928a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_euclid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:38:48 compute-0 podman[38293]: 2025-10-09 09:38:48.250278718 +0000 UTC m=+0.016978026 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:38:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Oct  9 09:38:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Oct  9 09:38:48 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Oct  9 09:38:48 compute-0 festive_euclid[38307]: {
Oct  9 09:38:48 compute-0 festive_euclid[38307]:    "1": [
Oct  9 09:38:48 compute-0 festive_euclid[38307]:        {
Oct  9 09:38:48 compute-0 festive_euclid[38307]:            "devices": [
Oct  9 09:38:48 compute-0 festive_euclid[38307]:                "/dev/loop3"
Oct  9 09:38:48 compute-0 festive_euclid[38307]:            ],
Oct  9 09:38:48 compute-0 festive_euclid[38307]:            "lv_name": "ceph_lv0",
Oct  9 09:38:48 compute-0 festive_euclid[38307]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:38:48 compute-0 festive_euclid[38307]:            "lv_size": "21470642176",
Oct  9 09:38:48 compute-0 festive_euclid[38307]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:38:48 compute-0 festive_euclid[38307]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:38:48 compute-0 festive_euclid[38307]:            "name": "ceph_lv0",
Oct  9 09:38:48 compute-0 festive_euclid[38307]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:38:48 compute-0 festive_euclid[38307]:            "tags": {
Oct  9 09:38:48 compute-0 festive_euclid[38307]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:38:48 compute-0 festive_euclid[38307]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:38:48 compute-0 festive_euclid[38307]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:38:48 compute-0 festive_euclid[38307]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:38:48 compute-0 festive_euclid[38307]:                "ceph.cluster_name": "ceph",
Oct  9 09:38:48 compute-0 festive_euclid[38307]:                "ceph.crush_device_class": "",
Oct  9 09:38:48 compute-0 festive_euclid[38307]:                "ceph.encrypted": "0",
Oct  9 09:38:48 compute-0 festive_euclid[38307]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:38:48 compute-0 festive_euclid[38307]:                "ceph.osd_id": "1",
Oct  9 09:38:48 compute-0 festive_euclid[38307]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:38:48 compute-0 festive_euclid[38307]:                "ceph.type": "block",
Oct  9 09:38:48 compute-0 festive_euclid[38307]:                "ceph.vdo": "0",
Oct  9 09:38:48 compute-0 festive_euclid[38307]:                "ceph.with_tpm": "0"
Oct  9 09:38:48 compute-0 festive_euclid[38307]:            },
Oct  9 09:38:48 compute-0 festive_euclid[38307]:            "type": "block",
Oct  9 09:38:48 compute-0 festive_euclid[38307]:            "vg_name": "ceph_vg0"
Oct  9 09:38:48 compute-0 festive_euclid[38307]:        }
Oct  9 09:38:48 compute-0 festive_euclid[38307]:    ]
Oct  9 09:38:48 compute-0 festive_euclid[38307]: }
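The festive_euclid payload above is ceph-volume lvm list output in JSON form: one entry per OSD id, each carrying the LV path and the ceph.* LVM tags cephadm uses to re-identify the OSD across reboots. A minimal sketch that reduces such a payload to an OSD-to-device map:

    # Minimal sketch: reduce the `ceph-volume lvm list --format json`
    # payload shown above to an osd-id -> backing-devices map.
    import json

    def osd_devices(payload):
        out = {}
        for osd_id, lvs in json.loads(payload).items():
            for lv in lvs:
                if lv.get("type") == "block":   # ceph.type=block entries only
                    out.setdefault(osd_id, []).extend(lv.get("devices", []))
        return out

    # For the payload above: {'1': ['/dev/loop3']}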
Oct  9 09:38:48 compute-0 systemd[1]: libpod-b638b63b2865d431be5dfc47bfc19d8659ed3fbd4fa3ec4714fcbab730a928a7.scope: Deactivated successfully.
Oct  9 09:38:48 compute-0 podman[38293]: 2025-10-09 09:38:48.568307946 +0000 UTC m=+0.335007244 container died b638b63b2865d431be5dfc47bfc19d8659ed3fbd4fa3ec4714fcbab730a928a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  9 09:38:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd83f34b636a422a008803de1d6bbafa7b6fd2daf7e96e328ca83bd6add42bdc-merged.mount: Deactivated successfully.
Oct  9 09:38:48 compute-0 podman[38293]: 2025-10-09 09:38:48.592420991 +0000 UTC m=+0.359120289 container remove b638b63b2865d431be5dfc47bfc19d8659ed3fbd4fa3ec4714fcbab730a928a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_euclid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 09:38:48 compute-0 systemd[1]: libpod-conmon-b638b63b2865d431be5dfc47bfc19d8659ed3fbd4fa3ec4714fcbab730a928a7.scope: Deactivated successfully.
Oct  9 09:38:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:38:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:48.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:38:48 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Oct  9 09:38:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:48 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e00025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:49 compute-0 podman[38434]: 2025-10-09 09:38:49.003690434 +0000 UTC m=+0.031309171 container create bba630486ceb6b4e989cd09912a671c255c971e002ef5eea36153336bc827909 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:38:49 compute-0 systemd[1]: Started libpod-conmon-bba630486ceb6b4e989cd09912a671c255c971e002ef5eea36153336bc827909.scope.
Oct  9 09:38:49 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:38:49 compute-0 podman[38434]: 2025-10-09 09:38:49.052361133 +0000 UTC m=+0.079979889 container init bba630486ceb6b4e989cd09912a671c255c971e002ef5eea36153336bc827909 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 09:38:49 compute-0 podman[38434]: 2025-10-09 09:38:49.057366027 +0000 UTC m=+0.084984764 container start bba630486ceb6b4e989cd09912a671c255c971e002ef5eea36153336bc827909 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:38:49 compute-0 podman[38434]: 2025-10-09 09:38:49.058438719 +0000 UTC m=+0.086057455 container attach bba630486ceb6b4e989cd09912a671c255c971e002ef5eea36153336bc827909 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_engelbart, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:38:49 compute-0 practical_engelbart[38448]: 167 167
Oct  9 09:38:49 compute-0 systemd[1]: libpod-bba630486ceb6b4e989cd09912a671c255c971e002ef5eea36153336bc827909.scope: Deactivated successfully.
Oct  9 09:38:49 compute-0 podman[38434]: 2025-10-09 09:38:49.060634939 +0000 UTC m=+0.088253675 container died bba630486ceb6b4e989cd09912a671c255c971e002ef5eea36153336bc827909 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_engelbart, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:38:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe99ffc6ffebfae531c65cc6ab7e1b0cabca7ab020ab90589ad364ce294b73b3-merged.mount: Deactivated successfully.
Oct  9 09:38:49 compute-0 podman[38434]: 2025-10-09 09:38:49.083274046 +0000 UTC m=+0.110892783 container remove bba630486ceb6b4e989cd09912a671c255c971e002ef5eea36153336bc827909 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:38:49 compute-0 podman[38434]: 2025-10-09 09:38:48.990642007 +0000 UTC m=+0.018260764 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:38:49 compute-0 systemd[1]: libpod-conmon-bba630486ceb6b4e989cd09912a671c255c971e002ef5eea36153336bc827909.scope: Deactivated successfully.
Oct  9 09:38:49 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Oct  9 09:38:49 compute-0 podman[38470]: 2025-10-09 09:38:49.202159855 +0000 UTC m=+0.029607564 container create f0795e969dd8d30a2583b5494a097e3f6104a36226a3fa38fee7c54e9873fcb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_newton, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 09:38:49 compute-0 systemd[1]: Started libpod-conmon-f0795e969dd8d30a2583b5494a097e3f6104a36226a3fa38fee7c54e9873fcb8.scope.
Oct  9 09:38:49 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5cde4e71f724cd41bc3c1c3a42c4df31d666db07d9e8e4d6bd2a56797db2f38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5cde4e71f724cd41bc3c1c3a42c4df31d666db07d9e8e4d6bd2a56797db2f38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5cde4e71f724cd41bc3c1c3a42c4df31d666db07d9e8e4d6bd2a56797db2f38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5cde4e71f724cd41bc3c1c3a42c4df31d666db07d9e8e4d6bd2a56797db2f38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:38:49 compute-0 podman[38470]: 2025-10-09 09:38:49.257655158 +0000 UTC m=+0.085102866 container init f0795e969dd8d30a2583b5494a097e3f6104a36226a3fa38fee7c54e9873fcb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_newton, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:38:49 compute-0 podman[38470]: 2025-10-09 09:38:49.263781987 +0000 UTC m=+0.091229686 container start f0795e969dd8d30a2583b5494a097e3f6104a36226a3fa38fee7c54e9873fcb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_newton, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:38:49 compute-0 podman[38470]: 2025-10-09 09:38:49.265198908 +0000 UTC m=+0.092646617 container attach f0795e969dd8d30a2583b5494a097e3f6104a36226a3fa38fee7c54e9873fcb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_newton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Oct  9 09:38:49 compute-0 podman[38470]: 2025-10-09 09:38:49.191003164 +0000 UTC m=+0.018450892 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:38:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:49 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4003ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:38:49
Oct  9 09:38:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:38:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Some PGs (0.011869) are inactive; try again later
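The balancer's inactive fraction lines up with the pgmap entry logged shortly before it: 4 of the 337 pgs are remapped+peering, and 4/337 rounds to exactly the figure reported, so the upmap pass bails out until peering settles (the 0.05 value is the separate max-misplaced cap):

    # The inactive fraction reported above is just the peering share of
    # the pgmap line: 4 remapped+peering PGs out of 337 total.
    inactive = 4 / 337
    print(f"{inactive:.6f}")   # 0.011869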
Oct  9 09:38:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:38:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:38:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:38:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:38:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:38:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:38:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:38:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:38:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:38:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:38:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:38:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:38:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:38:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:38:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:38:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:38:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:38:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:38:49 compute-0 dreamy_newton[38483]: {}
Oct  9 09:38:49 compute-0 lvm[38560]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:38:49 compute-0 lvm[38560]: VG ceph_vg0 finished
Oct  9 09:38:49 compute-0 systemd[1]: libpod-f0795e969dd8d30a2583b5494a097e3f6104a36226a3fa38fee7c54e9873fcb8.scope: Deactivated successfully.
Oct  9 09:38:49 compute-0 podman[38470]: 2025-10-09 09:38:49.766916683 +0000 UTC m=+0.594364392 container died f0795e969dd8d30a2583b5494a097e3f6104a36226a3fa38fee7c54e9873fcb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  9 09:38:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5cde4e71f724cd41bc3c1c3a42c4df31d666db07d9e8e4d6bd2a56797db2f38-merged.mount: Deactivated successfully.
Oct  9 09:38:49 compute-0 podman[38470]: 2025-10-09 09:38:49.789171776 +0000 UTC m=+0.616619485 container remove f0795e969dd8d30a2583b5494a097e3f6104a36226a3fa38fee7c54e9873fcb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_newton, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:38:49 compute-0 systemd[1]: libpod-conmon-f0795e969dd8d30a2583b5494a097e3f6104a36226a3fa38fee7c54e9873fcb8.scope: Deactivated successfully.
Oct  9 09:38:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:38:49 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:38:49 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:49.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:49 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Oct  9 09:38:49 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Oct  9 09:38:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:50 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e4008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:50 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v114: 337 pgs: 4 remapped+peering, 333 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:38:50 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:50 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:50.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.1c scrub starts
Oct  9 09:38:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.1c scrub ok
Oct  9 09:38:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:50 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4004bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:51 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:51 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e0004290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:51 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.12 scrub starts
Oct  9 09:38:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:38:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:51.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:38:51 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.12 scrub ok
Oct  9 09:38:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:52 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4004bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:38:52] "GET /metrics HTTP/1.1" 200 48361 "" "Prometheus/2.51.0"
Oct  9 09:38:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:38:52] "GET /metrics HTTP/1.1" 200 48361 "" "Prometheus/2.51.0"
Oct  9 09:38:52 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v115: 337 pgs: 337 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 73 B/s, 4 objects/s recovering
Oct  9 09:38:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Oct  9 09:38:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  9 09:38:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Oct  9 09:38:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  9 09:38:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Oct  9 09:38:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  9 09:38:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  9 09:38:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Oct  9 09:38:52 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Oct  9 09:38:52 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  9 09:38:52 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  9 09:38:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:52.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:52 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 92 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92 pruub=8.152629852s) [0] r=-1 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 41'42 active pruub 250.579452515s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:52 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 92 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92 pruub=8.152599335s) [0] r=-1 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 250.579452515s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:52 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct  9 09:38:52 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct  9 09:38:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:52 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e4009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:53 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4005900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Oct  9 09:38:53 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  9 09:38:53 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  9 09:38:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Oct  9 09:38:53 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Oct  9 09:38:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:53.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:53 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.10 scrub starts
Oct  9 09:38:53 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.10 scrub ok
Oct  9 09:38:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:54 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4005900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:54 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v118: 337 pgs: 337 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 73 B/s, 4 objects/s recovering
Oct  9 09:38:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Oct  9 09:38:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  9 09:38:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Oct  9 09:38:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
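Each handle_command entry shows the monitor receiving a command as a JSON object with a "prefix" plus per-command arguments; this is the same wire format librados' mon_command() takes. A read-only sketch issuing one from Python, assuming the python3-rados binding, a readable /etc/ceph/ceph.conf, and a keyring permitted to query pools:

    import json
    import rados  # python3-rados

    cmd = {"prefix": "osd pool get", "pool": "default.rgw.log",
           "var": "pgp_num", "format": "json"}

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
        if ret == 0:
            print(json.loads(outbuf))
        else:
            print(f"mon_command failed: rc={ret} {outs}")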
Oct  9 09:38:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Oct  9 09:38:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  9 09:38:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  9 09:38:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  9 09:38:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  9 09:38:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Oct  9 09:38:54 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Oct  9 09:38:54 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 94 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:38:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:54.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:38:54 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.1d deep-scrub starts
Oct  9 09:38:54 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.1d deep-scrub ok
Oct  9 09:38:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:54 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4005900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:55 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e4009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Oct  9 09:38:55 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  9 09:38:55 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  9 09:38:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Oct  9 09:38:55 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Oct  9 09:38:55 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 95 pg[6.f( v 41'42 lc 35'1 (0'0,41'42] local-lis/les=94/95 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:55 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 3.12 deep-scrub starts
Oct  9 09:38:55 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 3.12 deep-scrub ok
Oct  9 09:38:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:55.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:56 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4005900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:56 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v121: 337 pgs: 2 unknown, 1 peering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:38:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Oct  9 09:38:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Oct  9 09:38:56 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Oct  9 09:38:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:56.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-0-ujrhwc[30455]: [WARNING] 281/093856 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
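The haproxy check failure above and the steady ganesha svc_vc_recv events are two views of the same front end: ganesha sits behind haproxy and expects each connection to open with a PROXY-protocol header, and connections that arrive without a complete header are dropped ("will set dead"; the rlen = % appears to be a formatting defect in ganesha's log message, so the offending length is never printed). For reference, a sketch of the 16-byte PROXY v2 header a health probe would send, assuming protocol v2 with the LOCAL command that proxies use for checks; the endpoint below is hypothetical:

    import socket
    import struct

    # PROXY protocol v2: 12-byte signature, then version/command, address
    # family/transport, and a 2-byte big-endian length of the address block.
    SIG = b"\r\n\r\n\x00\r\nQUIT\n"

    def proxy_v2_local_header() -> bytes:
        # 0x20 = version 2, command LOCAL (no client address relayed, as for
        # health checks); 0x00 = AF_UNSPEC/unspec; zero-length address block.
        return SIG + struct.pack("!BBH", 0x20, 0x00, 0)

    def probe(host: str, port: int) -> None:
        with socket.create_connection((host, port), timeout=5) as s:
            s.sendall(proxy_v2_local_header())
            # A real NFS check would continue with an RPC NULL call; this only
            # demonstrates the header ganesha expects before anything else.

    probe("127.0.0.1", 2049)  # hypothetical address and port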
Oct  9 09:38:56 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.19 deep-scrub starts
Oct  9 09:38:56 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.19 deep-scrub ok
Oct  9 09:38:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:57 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4005900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:57 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4005900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:57 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Oct  9 09:38:57 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Oct  9 09:38:57 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Oct  9 09:38:57 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct  9 09:38:57 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct  9 09:38:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:38:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:57.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:38:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:58 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e400a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:58 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v124: 337 pgs: 2 unknown, 1 peering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:38:58 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Oct  9 09:38:58 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Oct  9 09:38:58 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Oct  9 09:38:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:58.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:58 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct  9 09:38:58 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct  9 09:38:59 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:59 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4005900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:38:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
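Each pg_autoscaler pass logs, per pool, the fraction of raw capacity used, the pool's bias, the computed PG target, and the power-of-two value it quantizes to alongside the current pg_num. A sketch that tabulates these reports, with the pattern written against the lines above:

    import re

    AUTOSCALE_RE = re.compile(
        r"Pool '(?P<pool>[^']+)' root_id (?P<root>-?\d+) "
        r"using (?P<used>[\d.e+-]+) of space, bias (?P<bias>[\d.]+), "
        r"pg target (?P<target>[\d.e+-]+) quantized to (?P<quantized>\d+) "
        r"\(current (?P<current>\d+)\)")

    def autoscaler_rows(lines):
        for line in lines:
            m = AUTOSCALE_RE.search(line)
            if m:
                yield m.groupdict()

    sample = ("Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 "
              "of space, bias 4.0, pg target 0.0006104707950771635 "
              "quantized to 16 (current 16)")
    print(next(autoscaler_rows([sample])))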
Oct  9 09:38:59 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:38:59 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4005900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:59 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Oct  9 09:38:59 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Oct  9 09:38:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:38:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:38:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:59.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:39:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:39:00 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4005900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:00 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v126: 337 pgs: 2 unknown, 1 peering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:39:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:39:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:00.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:00 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.a deep-scrub starts
Oct  9 09:39:00 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.a deep-scrub ok
Oct  9 09:39:00 compute-0 python3.9[38759]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:39:01 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:39:01 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e400a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:39:01 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:39:01 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4005900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:01 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Oct  9 09:39:01 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Oct  9 09:39:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:39:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:01.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:39:02 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4005900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:39:02] "GET /metrics HTTP/1.1" 200 48361 "" "Prometheus/2.51.0"
Oct  9 09:39:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:39:02] "GET /metrics HTTP/1.1" 200 48361 "" "Prometheus/2.51.0"
Oct  9 09:39:02 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v127: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 21 op/s; 212 B/s, 6 objects/s recovering
Oct  9 09:39:02 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Oct  9 09:39:02 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct  9 09:39:02 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Oct  9 09:39:02 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct  9 09:39:02 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Oct  9 09:39:02 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Oct  9 09:39:02 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct  9 09:39:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:39:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:02.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:02 compute-0 python3.9[39048]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  9 09:39:02 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Oct  9 09:39:02 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Oct  9 09:39:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:39:03 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4005900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:03 compute-0 python3.9[39200]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  9 09:39:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:39:03 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4005900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:03 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Oct  9 09:39:03 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct  9 09:39:03 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Oct  9 09:39:03 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Oct  9 09:39:03 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Oct  9 09:39:03 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Oct  9 09:39:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:39:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:03.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:03 compute-0 python3.9[39352]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:39:04 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:39:04 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4005900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v130: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 21 op/s; 212 B/s, 6 objects/s recovering
Oct  9 09:39:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Oct  9 09:39:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct  9 09:39:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:39:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:39:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Oct  9 09:39:04 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct  9 09:39:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct  9 09:39:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Oct  9 09:39:04 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Oct  9 09:39:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:39:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:04.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:04 compute-0 python3.9[39506]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
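The three Ansible tasks at 09:39:03-09:39:04 are the classic swap-file recipe: dd a 1024 MiB zeroed /swap (creates=/swap keeps it idempotent), tighten it to root:root 0600, and register "/swap none swap sw 0 0" in fstab. The same steps as a plain Python sketch (illustrative only; a full setup would also run mkswap and swapon, which do not appear in this excerpt):

    import os
    import pathlib

    SWAP = pathlib.Path("/swap")
    FSTAB = pathlib.Path("/etc/fstab")
    SIZE_MIB = 1024

    def setup_swap_file() -> None:
        if not SWAP.exists():          # mirrors dd's creates=/swap guard
            chunk = b"\0" * (1024 * 1024)
            with open(SWAP, "wb") as f:
                for _ in range(SIZE_MIB):
                    f.write(chunk)
        os.chown(SWAP, 0, 0)           # root:root
        os.chmod(SWAP, 0o600)
        entry = "/swap none swap sw 0 0\n"
        if entry not in FSTAB.read_text():
            with open(FSTAB, "a") as f:  # mirrors ansible.posix.mount state=present
                f.write(entry)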
Oct  9 09:39:04 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 3.b scrub starts
Oct  9 09:39:04 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 3.b scrub ok
Oct  9 09:39:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:39:05 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e0004290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:39:05 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e400a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:05 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Oct  9 09:39:05 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Oct  9 09:39:05 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Oct  9 09:39:05 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct  9 09:39:05 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Oct  9 09:39:05 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Oct  9 09:39:05 compute-0 python3.9[39658]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:39:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:39:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:05.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:39:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:39:06 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4005900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:39:06 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:39:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v133: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Oct  9 09:39:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Oct  9 09:39:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct  9 09:39:06 compute-0 python3.9[39811]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:39:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Oct  9 09:39:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct  9 09:39:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Oct  9 09:39:06 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 103 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103 pruub=10.431035995s) [2] r=-1 lpr=103 pi=[62,103)/1 crt=40'1059 mlcod 0'0 active pruub 266.582458496s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:06 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 103 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103 pruub=10.430859566s) [2] r=-1 lpr=103 pi=[62,103)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 266.582458496s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:39:06 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Oct  9 09:39:06 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct  9 09:39:06 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct  9 09:39:06 compute-0 python3.9[39890]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:39:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:39:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:06.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:06 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct  9 09:39:06 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct  9 09:39:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:39:07 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4005900 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:39:07 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e0004290 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Oct  9 09:39:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Oct  9 09:39:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 104 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 104 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:39:07 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Oct  9 09:39:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct  9 09:39:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct  9 09:39:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:39:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:07.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:08 compute-0 python3.9[40042]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  9 09:39:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:39:08 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e400a7e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v136: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Oct  9 09:39:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Oct  9 09:39:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct  9 09:39:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Oct  9 09:39:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct  9 09:39:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Oct  9 09:39:08 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Oct  9 09:39:08 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct  9 09:39:08 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct  9 09:39:08 compute-0 python3.9[40222]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  9 09:39:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:39:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:08.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:08 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Oct  9 09:39:08 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Oct  9 09:39:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:39:09 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4007cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=0 lpr=105 pi=[61,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:39:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:39:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:39:09 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:39:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:39:09 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
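Ganesha entered a 90-second grace period at 09:39:06 (a fresh epoch after the NFS daemon restarted), then reloaded client reclaim state from the backend and checked whether grace can be lifted early. The decision the reaper logs reduces to a check like the sketch below; the exact semantics of the logged counters are an assumption here:

    def can_lift_grace(pending_reclaims: int, active_clients: int) -> bool:
        # Paraphrase of "check grace:reclaim complete(0) clid count(0)": with no
        # clients left to reclaim state and none counted against the grace
        # period, the server need not sit out the full 90 seconds.
        return pending_reclaims == 0 and active_clients == 0

    print(can_lift_grace(0, 0))  # True: grace may end early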
Oct  9 09:39:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:39:09 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84d4007cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:09 compute-0 python3.9[40375]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  9 09:39:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Oct  9 09:39:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Oct  9 09:39:09 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Oct  9 09:39:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:39:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106 pruub=15.447549820s) [2] async=[2] r=-1 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 40'1059 active pruub 274.632110596s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:09 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106 pruub=15.447519302s) [2] r=-1 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 274.632110596s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:39:09 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Oct  9 09:39:09 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Oct  9 09:39:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:39:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:39:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:09.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:39:10 compute-0 python3.9[40529]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  9 09:39:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[37152]: 09/10/2025 09:39:10 : epoch 68e78291 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84e0004290 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v139: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  9 09:39:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Oct  9 09:39:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct  9 09:39:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Oct  9 09:39:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct  9 09:39:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Oct  9 09:39:10 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Oct  9 09:39:10 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=0 lpr=107 pi=[68,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:42:02 compute-0 podman[67643]: 2025-10-09 09:42:02.963833909 +0000 UTC m=+0.029365592 container create 4e3ea918cae9ea65c5bce81ae0cb9e4e7d6662c7a64f73ec1054bc8ea5d7f041 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:42:02 compute-0 systemd[1]: Started libpod-conmon-4e3ea918cae9ea65c5bce81ae0cb9e4e7d6662c7a64f73ec1054bc8ea5d7f041.scope.
Oct  9 09:42:03 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:42:03 compute-0 podman[67643]: 2025-10-09 09:42:03.017372376 +0000 UTC m=+0.082904060 container init 4e3ea918cae9ea65c5bce81ae0cb9e4e7d6662c7a64f73ec1054bc8ea5d7f041 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_bardeen, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:42:03 compute-0 podman[67643]: 2025-10-09 09:42:03.023002661 +0000 UTC m=+0.088534344 container start 4e3ea918cae9ea65c5bce81ae0cb9e4e7d6662c7a64f73ec1054bc8ea5d7f041 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_bardeen, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:42:03 compute-0 podman[67643]: 2025-10-09 09:42:03.024313256 +0000 UTC m=+0.089844940 container attach 4e3ea918cae9ea65c5bce81ae0cb9e4e7d6662c7a64f73ec1054bc8ea5d7f041 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_bardeen, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  9 09:42:03 compute-0 angry_bardeen[67657]: 167 167
Oct  9 09:42:03 compute-0 systemd[1]: libpod-4e3ea918cae9ea65c5bce81ae0cb9e4e7d6662c7a64f73ec1054bc8ea5d7f041.scope: Deactivated successfully.
Oct  9 09:42:03 compute-0 podman[67643]: 2025-10-09 09:42:03.027293094 +0000 UTC m=+0.092824767 container died 4e3ea918cae9ea65c5bce81ae0cb9e4e7d6662c7a64f73ec1054bc8ea5d7f041 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:42:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bf0e1017ab0c0ccd4fecbb125e5cb2718543a537b1e95b33a2ff284f615119c-merged.mount: Deactivated successfully.
Oct  9 09:42:03 compute-0 podman[67643]: 2025-10-09 09:42:02.952567601 +0000 UTC m=+0.018099295 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:42:03 compute-0 podman[67643]: 2025-10-09 09:42:03.056424224 +0000 UTC m=+0.121955907 container remove 4e3ea918cae9ea65c5bce81ae0cb9e4e7d6662c7a64f73ec1054bc8ea5d7f041 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  9 09:42:03 compute-0 systemd[1]: libpod-conmon-4e3ea918cae9ea65c5bce81ae0cb9e4e7d6662c7a64f73ec1054bc8ea5d7f041.scope: Deactivated successfully.
Oct  9 09:42:03 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:42:03 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:42:03 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:42:03 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:42:03 compute-0 python3.9[67642]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:03 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7858009580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:03 compute-0 rsyslogd[1243]: imjournal: 2961 messages lost due to rate-limiting (20000 allowed within 600 seconds)
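rsyslog's imjournal dropped 2961 journal messages here because input exceeded its default rate limit of 20000 messages per 600 seconds (tunable through the module's Ratelimit.Interval and Ratelimit.Burst parameters). To quantify how much of a busy log was lost overall, a scan that sums every such notice; the log path is an assumption:

    import re

    LOST_RE = re.compile(r"imjournal: (\d+) messages lost due to rate-limiting")

    def total_lost(lines) -> int:
        return sum(int(m.group(1)) for line in lines
                   if (m := LOST_RE.search(line)))

    with open("/var/log/messages") as f:
        print(total_lost(f))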
Oct  9 09:42:03 compute-0 podman[67684]: 2025-10-09 09:42:03.183551447 +0000 UTC m=+0.039198786 container create 41908a356c6c8b777fdca1c5949ac33d12d55ad8142e80e5533be32df7957e3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_greider, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Oct  9 09:42:03 compute-0 systemd[1]: Started libpod-conmon-41908a356c6c8b777fdca1c5949ac33d12d55ad8142e80e5533be32df7957e3f.scope.
Oct  9 09:42:03 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:42:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d82b787e02aa7678d9a1d399e5532cec315963a5315f8141d1b85c9599c51c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:42:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d82b787e02aa7678d9a1d399e5532cec315963a5315f8141d1b85c9599c51c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:42:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d82b787e02aa7678d9a1d399e5532cec315963a5315f8141d1b85c9599c51c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:42:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d82b787e02aa7678d9a1d399e5532cec315963a5315f8141d1b85c9599c51c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:42:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d82b787e02aa7678d9a1d399e5532cec315963a5315f8141d1b85c9599c51c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
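The kernel notes that these XFS mounts store 32-bit inode timestamps, valid only until 0x7fffffff seconds after the Unix epoch (the y2038 limit; XFS v5's bigtime feature extends the range). Decoding the cutoff:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit time_t value.
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit.isoformat())  # 2038-01-19T03:14:07+00:00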
Oct  9 09:42:03 compute-0 podman[67684]: 2025-10-09 09:42:03.231004802 +0000 UTC m=+0.086652160 container init 41908a356c6c8b777fdca1c5949ac33d12d55ad8142e80e5533be32df7957e3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_greider, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:42:03 compute-0 podman[67684]: 2025-10-09 09:42:03.237302526 +0000 UTC m=+0.092949865 container start 41908a356c6c8b777fdca1c5949ac33d12d55ad8142e80e5533be32df7957e3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_greider, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  9 09:42:03 compute-0 podman[67684]: 2025-10-09 09:42:03.238551736 +0000 UTC m=+0.094199075 container attach 41908a356c6c8b777fdca1c5949ac33d12d55ad8142e80e5533be32df7957e3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_greider, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:42:03 compute-0 podman[67684]: 2025-10-09 09:42:03.163905812 +0000 UTC m=+0.019553172 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:42:03 compute-0 quizzical_greider[67739]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:42:03 compute-0 quizzical_greider[67739]: --> All data devices are unavailable
Oct  9 09:42:03 compute-0 systemd[1]: libpod-41908a356c6c8b777fdca1c5949ac33d12d55ad8142e80e5533be32df7957e3f.scope: Deactivated successfully.
Oct  9 09:42:03 compute-0 podman[67684]: 2025-10-09 09:42:03.504213628 +0000 UTC m=+0.359860967 container died 41908a356c6c8b777fdca1c5949ac33d12d55ad8142e80e5533be32df7957e3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_greider, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:42:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-62d82b787e02aa7678d9a1d399e5532cec315963a5315f8141d1b85c9599c51c-merged.mount: Deactivated successfully.
Oct  9 09:42:03 compute-0 python3.9[67819]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002922.7281468-502-28044177043578/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=33c8f33573977531b53684e2994bebf61fcf3afe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:03 compute-0 podman[67684]: 2025-10-09 09:42:03.526619356 +0000 UTC m=+0.382266694 container remove 41908a356c6c8b777fdca1c5949ac33d12d55ad8142e80e5533be32df7957e3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_greider, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct  9 09:42:03 compute-0 systemd[1]: libpod-conmon-41908a356c6c8b777fdca1c5949ac33d12d55ad8142e80e5533be32df7957e3f.scope: Deactivated successfully.
Oct  9 09:42:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:03 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7850003310 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:03 compute-0 podman[67944]: 2025-10-09 09:42:03.918695795 +0000 UTC m=+0.025366931 container create 19d75a6df090b132cea9906f65bb376da9e4d238aa56508cd4bb8f976eb6d573 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_easley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:42:03 compute-0 systemd[1]: Started libpod-conmon-19d75a6df090b132cea9906f65bb376da9e4d238aa56508cd4bb8f976eb6d573.scope.
Oct  9 09:42:03 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:42:03 compute-0 podman[67944]: 2025-10-09 09:42:03.967712572 +0000 UTC m=+0.074383728 container init 19d75a6df090b132cea9906f65bb376da9e4d238aa56508cd4bb8f976eb6d573 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:42:03 compute-0 podman[67944]: 2025-10-09 09:42:03.971890062 +0000 UTC m=+0.078561198 container start 19d75a6df090b132cea9906f65bb376da9e4d238aa56508cd4bb8f976eb6d573 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid)
Oct  9 09:42:03 compute-0 podman[67944]: 2025-10-09 09:42:03.972987846 +0000 UTC m=+0.079658981 container attach 19d75a6df090b132cea9906f65bb376da9e4d238aa56508cd4bb8f976eb6d573 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:42:03 compute-0 clever_easley[67957]: 167 167
Oct  9 09:42:03 compute-0 systemd[1]: libpod-19d75a6df090b132cea9906f65bb376da9e4d238aa56508cd4bb8f976eb6d573.scope: Deactivated successfully.
Oct  9 09:42:03 compute-0 podman[67944]: 2025-10-09 09:42:03.975339808 +0000 UTC m=+0.082010943 container died 19d75a6df090b132cea9906f65bb376da9e4d238aa56508cd4bb8f976eb6d573 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_easley, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  9 09:42:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0d3fe142e0d8f4191aa91e14adc75a7580373e9332c7444bda58e70027cf4a1-merged.mount: Deactivated successfully.
Oct  9 09:42:03 compute-0 podman[67944]: 2025-10-09 09:42:03.994706426 +0000 UTC m=+0.101377561 container remove 19d75a6df090b132cea9906f65bb376da9e4d238aa56508cd4bb8f976eb6d573 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:42:03 compute-0 podman[67944]: 2025-10-09 09:42:03.908118568 +0000 UTC m=+0.014789724 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:42:04 compute-0 systemd[1]: libpod-conmon-19d75a6df090b132cea9906f65bb376da9e4d238aa56508cd4bb8f976eb6d573.scope: Deactivated successfully.
Oct  9 09:42:04 compute-0 podman[67980]: 2025-10-09 09:42:04.107323904 +0000 UTC m=+0.031131248 container create 83097088ee8d99b001fe92134ab86696c1c195144f5a68fe698804bf333046e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hawking, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:42:04 compute-0 systemd[1]: Started libpod-conmon-83097088ee8d99b001fe92134ab86696c1c195144f5a68fe698804bf333046e6.scope.
Oct  9 09:42:04 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07801f48b3b218855f6d059894bbfae81087d42b628239cd571d4daa8eeee5c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07801f48b3b218855f6d059894bbfae81087d42b628239cd571d4daa8eeee5c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07801f48b3b218855f6d059894bbfae81087d42b628239cd571d4daa8eeee5c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07801f48b3b218855f6d059894bbfae81087d42b628239cd571d4daa8eeee5c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:42:04 compute-0 podman[67980]: 2025-10-09 09:42:04.157993553 +0000 UTC m=+0.081800907 container init 83097088ee8d99b001fe92134ab86696c1c195144f5a68fe698804bf333046e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hawking, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:42:04 compute-0 podman[67980]: 2025-10-09 09:42:04.163165411 +0000 UTC m=+0.086972745 container start 83097088ee8d99b001fe92134ab86696c1c195144f5a68fe698804bf333046e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hawking, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:42:04 compute-0 podman[67980]: 2025-10-09 09:42:04.164352193 +0000 UTC m=+0.088159547 container attach 83097088ee8d99b001fe92134ab86696c1c195144f5a68fe698804bf333046e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hawking, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:42:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:04.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:04 compute-0 podman[67980]: 2025-10-09 09:42:04.091745752 +0000 UTC m=+0.015553105 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:42:04 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:04 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f78880023d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v253: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]: {
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:    "1": [
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:        {
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:            "devices": [
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:                "/dev/loop3"
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:            ],
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:            "lv_name": "ceph_lv0",
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:            "lv_size": "21470642176",
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:            "name": "ceph_lv0",
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:            "tags": {
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:                "ceph.cluster_name": "ceph",
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:                "ceph.crush_device_class": "",
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:                "ceph.encrypted": "0",
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:                "ceph.osd_id": "1",
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:                "ceph.type": "block",
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:                "ceph.vdo": "0",
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:                "ceph.with_tpm": "0"
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:            },
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:            "type": "block",
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:            "vg_name": "ceph_vg0"
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:        }
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]:    ]
Oct  9 09:42:04 compute-0 quizzical_hawking[68025]: }
Oct  9 09:42:04 compute-0 systemd[1]: libpod-83097088ee8d99b001fe92134ab86696c1c195144f5a68fe698804bf333046e6.scope: Deactivated successfully.
Oct  9 09:42:04 compute-0 conmon[68025]: conmon 83097088ee8d99b001fe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83097088ee8d99b001fe92134ab86696c1c195144f5a68fe698804bf333046e6.scope/container/memory.events
Oct  9 09:42:04 compute-0 podman[68131]: 2025-10-09 09:42:04.454990799 +0000 UTC m=+0.016587570 container died 83097088ee8d99b001fe92134ab86696c1c195144f5a68fe698804bf333046e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hawking, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:42:04 compute-0 python3.9[68126]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-07801f48b3b218855f6d059894bbfae81087d42b628239cd571d4daa8eeee5c9-merged.mount: Deactivated successfully.
Oct  9 09:42:04 compute-0 podman[68131]: 2025-10-09 09:42:04.47677408 +0000 UTC m=+0.038370852 container remove 83097088ee8d99b001fe92134ab86696c1c195144f5a68fe698804bf333046e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hawking, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct  9 09:42:04 compute-0 systemd[1]: libpod-conmon-83097088ee8d99b001fe92134ab86696c1c195144f5a68fe698804bf333046e6.scope: Deactivated successfully.
Oct  9 09:42:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:42:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:42:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:04.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:04 compute-0 podman[68378]: 2025-10-09 09:42:04.897270937 +0000 UTC m=+0.026148618 container create ca4dff4b40d7c5fc06cc9e680da488e5dc312875da825dd5b25473a1796fd76e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_einstein, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:42:04 compute-0 python3.9[68348]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:04 compute-0 systemd[1]: Started libpod-conmon-ca4dff4b40d7c5fc06cc9e680da488e5dc312875da825dd5b25473a1796fd76e.scope.
Oct  9 09:42:04 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:42:04 compute-0 podman[68378]: 2025-10-09 09:42:04.951528873 +0000 UTC m=+0.080406554 container init ca4dff4b40d7c5fc06cc9e680da488e5dc312875da825dd5b25473a1796fd76e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_einstein, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  9 09:42:04 compute-0 podman[68378]: 2025-10-09 09:42:04.956174447 +0000 UTC m=+0.085052128 container start ca4dff4b40d7c5fc06cc9e680da488e5dc312875da825dd5b25473a1796fd76e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_einstein, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Oct  9 09:42:04 compute-0 podman[68378]: 2025-10-09 09:42:04.957639083 +0000 UTC m=+0.086516765 container attach ca4dff4b40d7c5fc06cc9e680da488e5dc312875da825dd5b25473a1796fd76e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_einstein, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  9 09:42:04 compute-0 loving_einstein[68392]: 167 167
Oct  9 09:42:04 compute-0 systemd[1]: libpod-ca4dff4b40d7c5fc06cc9e680da488e5dc312875da825dd5b25473a1796fd76e.scope: Deactivated successfully.
Oct  9 09:42:04 compute-0 podman[68378]: 2025-10-09 09:42:04.960534181 +0000 UTC m=+0.089411862 container died ca4dff4b40d7c5fc06cc9e680da488e5dc312875da825dd5b25473a1796fd76e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  9 09:42:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-85356196779af34baefc59b31541ad4dbaeeed5108168bdbf7ad525c489f5f39-merged.mount: Deactivated successfully.
Oct  9 09:42:04 compute-0 podman[68378]: 2025-10-09 09:42:04.977876346 +0000 UTC m=+0.106754027 container remove ca4dff4b40d7c5fc06cc9e680da488e5dc312875da825dd5b25473a1796fd76e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:42:04 compute-0 podman[68378]: 2025-10-09 09:42:04.886946087 +0000 UTC m=+0.015823788 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:42:04 compute-0 systemd[1]: libpod-conmon-ca4dff4b40d7c5fc06cc9e680da488e5dc312875da825dd5b25473a1796fd76e.scope: Deactivated successfully.
Oct  9 09:42:05 compute-0 podman[68461]: 2025-10-09 09:42:05.096755101 +0000 UTC m=+0.032083557 container create 8336cf7089589522324af347229d6cbfc8fdc9d9481d1e62942d03de00023deb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_euclid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:42:05 compute-0 systemd[1]: Started libpod-conmon-8336cf7089589522324af347229d6cbfc8fdc9d9481d1e62942d03de00023deb.scope.
Oct  9 09:42:05 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:42:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22b3a9056514e7eb57b6ee1e9bb84c8c013c784d533d0bbe2e50d74641de318e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:42:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22b3a9056514e7eb57b6ee1e9bb84c8c013c784d533d0bbe2e50d74641de318e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:42:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22b3a9056514e7eb57b6ee1e9bb84c8c013c784d533d0bbe2e50d74641de318e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:42:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22b3a9056514e7eb57b6ee1e9bb84c8c013c784d533d0bbe2e50d74641de318e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:42:05 compute-0 podman[68461]: 2025-10-09 09:42:05.147924924 +0000 UTC m=+0.083253400 container init 8336cf7089589522324af347229d6cbfc8fdc9d9481d1e62942d03de00023deb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_euclid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:42:05 compute-0 podman[68461]: 2025-10-09 09:42:05.153893216 +0000 UTC m=+0.089221672 container start 8336cf7089589522324af347229d6cbfc8fdc9d9481d1e62942d03de00023deb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:42:05 compute-0 podman[68461]: 2025-10-09 09:42:05.155108953 +0000 UTC m=+0.090437429 container attach 8336cf7089589522324af347229d6cbfc8fdc9d9481d1e62942d03de00023deb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct  9 09:42:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:05 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f78800049e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:05 compute-0 podman[68461]: 2025-10-09 09:42:05.085840305 +0000 UTC m=+0.021168782 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:42:05 compute-0 python3.9[68554]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002924.5923946-713-255305925244301/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:05 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7858009580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:05 compute-0 optimistic_euclid[68514]: {}
Oct  9 09:42:05 compute-0 lvm[68708]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:42:05 compute-0 lvm[68708]: VG ceph_vg0 finished
Oct  9 09:42:05 compute-0 systemd[1]: libpod-8336cf7089589522324af347229d6cbfc8fdc9d9481d1e62942d03de00023deb.scope: Deactivated successfully.
Oct  9 09:42:05 compute-0 podman[68461]: 2025-10-09 09:42:05.643293992 +0000 UTC m=+0.578622448 container died 8336cf7089589522324af347229d6cbfc8fdc9d9481d1e62942d03de00023deb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_euclid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  9 09:42:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-22b3a9056514e7eb57b6ee1e9bb84c8c013c784d533d0bbe2e50d74641de318e-merged.mount: Deactivated successfully.
Oct  9 09:42:05 compute-0 podman[68461]: 2025-10-09 09:42:05.665992201 +0000 UTC m=+0.601320656 container remove 8336cf7089589522324af347229d6cbfc8fdc9d9481d1e62942d03de00023deb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  9 09:42:05 compute-0 systemd[1]: libpod-conmon-8336cf7089589522324af347229d6cbfc8fdc9d9481d1e62942d03de00023deb.scope: Deactivated successfully.
Oct  9 09:42:05 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:42:05 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:42:05 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:42:05 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:42:05 compute-0 python3.9[68813]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:06 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:42:06 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:42:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:06.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:06 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7858009580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v254: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Oct  9 09:42:06 compute-0 python3.9[68967]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:06 compute-0 systemd[1]: session-3.scope: Deactivated successfully.
Oct  9 09:42:06 compute-0 systemd-logind[798]: Session 3 logged out. Waiting for processes to exit.
Oct  9 09:42:06 compute-0 systemd[1]: session-3.scope: Consumed 1min 9.425s CPU time.
Oct  9 09:42:06 compute-0 systemd-logind[798]: Removed session 3.
Oct  9 09:42:06 compute-0 python3.9[69091]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002926.049201-789-131260539031492/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000014s ======
Oct  9 09:42:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:06.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Oct  9 09:42:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:06.968Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:06.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:06.979Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:06.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:07 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7888002ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:07 compute-0 python3.9[69243]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:07 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f78800054e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:07 compute-0 python3.9[69395]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:08.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:08 compute-0 python3.9[69519]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002927.437206-861-221736127967705/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:08 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7858009580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v255: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct  9 09:42:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:08 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:42:08 compute-0 python3.9[69672]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:08.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:09 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7850003310 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:09 compute-0 python3.9[69849]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:09 compute-0 python3.9[69972]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002928.8438299-931-179029482389578/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:09 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7850003310 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:10 compute-0 python3.9[70124]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:10.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:10 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f78800054e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v256: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct  9 09:42:10 compute-0 python3.9[70277]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:10.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:10 compute-0 python3.9[70401]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002930.2254431-999-158064902994865/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:11 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7858009580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:11 compute-0 python3.9[70553]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:11 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7850003310 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:11 compute-0 python3.9[70705]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:12.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:12 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7850003310 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:42:12] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  9 09:42:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:42:12] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  9 09:42:12 compute-0 python3.9[70829]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002931.594296-1070-78369152795561/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:12 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v257: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 09:42:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:12.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:13 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7880006610 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:13 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Oct  9 09:42:13 compute-0 systemd[1]: session-32.scope: Consumed 15.967s CPU time.
Oct  9 09:42:13 compute-0 systemd-logind[798]: Session 32 logged out. Waiting for processes to exit.
Oct  9 09:42:13 compute-0 systemd-logind[798]: Removed session 32.
Oct  9 09:42:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:13 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7858009580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:14.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:14 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:14 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f78880040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:14 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v258: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct  9 09:42:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:14.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:15 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7850003310 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:15 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7880006790 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:16.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:16 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7858009580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:16 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v259: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Oct  9 09:42:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:16.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:16.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:16.978Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:16.978Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:16.978Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:17 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f78880040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:17 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7850003310 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000014s ======
Oct  9 09:42:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:18.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Oct  9 09:42:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:18 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f78800070b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:18 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v260: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct  9 09:42:18 compute-0 systemd-logind[798]: New session 33 of user zuul.
Oct  9 09:42:18 compute-0 systemd[1]: Started Session 33 of User zuul.
Oct  9 09:42:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:18 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:42:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000014s ======
Oct  9 09:42:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:18.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Oct  9 09:42:19 compute-0 python3.9[71016]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:19 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:19 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7858009580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:42:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:42:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:42:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:42:19 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:19 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7888004dd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:42:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:42:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:42:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:42:19 compute-0 python3.9[71168]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:20.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:20 compute-0 python3.9[71292]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002939.2620294-62-242449412937388/.source.conf _original_basename=ceph.conf follow=False checksum=8b7272e0630e6cb598e773121c6b56dda1c87bf8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:20 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7850003310 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:20 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v261: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct  9 09:42:20 compute-0 python3.9[71445]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:20.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:21 compute-0 python3.9[71568]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002940.3698342-62-36583891641071/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=f2b8c5d3158b549e18e5631f97d7800b8ceae49e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:21 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f78800070b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:21 compute-0 systemd-logind[798]: Session 33 logged out. Waiting for processes to exit.
Oct  9 09:42:21 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Oct  9 09:42:21 compute-0 systemd[1]: session-33.scope: Consumed 1.843s CPU time.
Oct  9 09:42:21 compute-0 systemd-logind[798]: Removed session 33.
Oct  9 09:42:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:21 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7858009580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:42:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:22.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:22 compute-0 kernel: ganesha.nfsd[67291]: segfault at 50 ip 00007f790518232e sp 00007f78bd7f9210 error 4 in libntirpc.so.5.8[7f7905167000+2c000] likely on CPU 1 (core 0, socket 1)
Oct  9 09:42:22 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct  9 09:42:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[49822]: 09/10/2025 09:42:22 : epoch 68e782fd : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7888004dd0 fd 39 proxy ignored for local
Oct  9 09:42:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:42:22] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Oct  9 09:42:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:42:22] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Oct  9 09:42:22 compute-0 systemd[1]: Started Process Core Dump (PID 71594/UID 0).
Oct  9 09:42:22 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v262: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 09:42:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:22.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:23 compute-0 systemd-coredump[71595]: Process 49826 (ganesha.nfsd) of user 0 dumped core.

Stack trace of thread 64:
#0  0x00007f790518232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
ELF object binary architecture: AMD x86-64
Oct  9 09:42:23 compute-0 systemd[1]: systemd-coredump@2-71594-0.service: Deactivated successfully.
Oct  9 09:42:23 compute-0 podman[71604]: 2025-10-09 09:42:23.273179953 +0000 UTC m=+0.018428066 container died 34ddefbaa38353d56824ee0d8eefa83a83c391e3440f34a73547b94299ee84b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:42:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f2698e4e2da84f057cb652aa7ff4f7e945f52fb4bd60ed5d22ff8a519c3d859-merged.mount: Deactivated successfully.
Oct  9 09:42:23 compute-0 podman[71604]: 2025-10-09 09:42:23.292809777 +0000 UTC m=+0.038057871 container remove 34ddefbaa38353d56824ee0d8eefa83a83c391e3440f34a73547b94299ee84b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:42:23 compute-0 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.2.0.compute-0.rlqbpy.service: Main process exited, code=exited, status=139/n/a
Oct  9 09:42:23 compute-0 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.2.0.compute-0.rlqbpy.service: Failed with result 'exit-code'.
Oct  9 09:42:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000014s ======
Oct  9 09:42:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:24.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Oct  9 09:42:24 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v263: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct  9 09:42:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000014s ======
Oct  9 09:42:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:24.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Oct  9 09:42:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:26.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:26 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v264: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Oct  9 09:42:26 compute-0 systemd-logind[798]: New session 34 of user zuul.
Oct  9 09:42:26 compute-0 systemd[1]: Started Session 34 of User zuul.
Oct  9 09:42:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:26.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:26.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:26.982Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:26.982Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:26.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:27 compute-0 python3.9[71793]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:42:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:28.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:28 compute-0 python3.9[71950]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-0-ujrhwc[30455]: [WARNING] 281/094228 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:42:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-0-ujrhwc[30455]: [ALERT] 281/094228 (4) : backend 'backend' has no server available!
Oct  9 09:42:28 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v265: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct  9 09:42:28 compute-0 python3.9[72103]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:28.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:29 compute-0 python3.9[72278]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:42:30 compute-0 python3.9[72430]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  9 09:42:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000014s ======
Oct  9 09:42:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:30.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Oct  9 09:42:30 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v266: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct  9 09:42:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:30.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:31 compute-0 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=2 res=1
Oct  9 09:42:31 compute-0 python3.9[72599]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:42:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:32.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:42:32] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Oct  9 09:42:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:42:32] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Oct  9 09:42:32 compute-0 python3.9[72684]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:42:32 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v267: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Oct  9 09:42:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:42:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:32.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:42:33 compute-0 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.2.0.compute-0.rlqbpy.service: Scheduled restart job, restart counter is at 3.
Oct  9 09:42:33 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.rlqbpy for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:42:33 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.rlqbpy for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:42:33 compute-0 podman[72804]: 2025-10-09 09:42:33.706549816 +0000 UTC m=+0.031364599 container create 217ee5710fb39dcff3e6e8fa0b8ba75104b7bad4fc42becb1070f2d0166a1a7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77697712b3aa00bb85a062189ed7a0fa1b18bea497aa39ca078764917801a52a/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  9 09:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77697712b3aa00bb85a062189ed7a0fa1b18bea497aa39ca078764917801a52a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77697712b3aa00bb85a062189ed7a0fa1b18bea497aa39ca078764917801a52a/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77697712b3aa00bb85a062189ed7a0fa1b18bea497aa39ca078764917801a52a/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.rlqbpy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:42:33 compute-0 podman[72804]: 2025-10-09 09:42:33.74518236 +0000 UTC m=+0.069997154 container init 217ee5710fb39dcff3e6e8fa0b8ba75104b7bad4fc42becb1070f2d0166a1a7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 09:42:33 compute-0 podman[72804]: 2025-10-09 09:42:33.749201893 +0000 UTC m=+0.074016676 container start 217ee5710fb39dcff3e6e8fa0b8ba75104b7bad4fc42becb1070f2d0166a1a7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:42:33 compute-0 bash[72804]: 217ee5710fb39dcff3e6e8fa0b8ba75104b7bad4fc42becb1070f2d0166a1a7f
Oct  9 09:42:33 compute-0 podman[72804]: 2025-10-09 09:42:33.692724424 +0000 UTC m=+0.017539227 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:42:33 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.rlqbpy for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:42:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  9 09:42:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  9 09:42:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  9 09:42:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  9 09:42:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  9 09:42:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  9 09:42:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  9 09:42:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:42:34 compute-0 python3.9[72930]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:42:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:34.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:34 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v268: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct  9 09:42:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:42:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:42:34 compute-0 python3[73087]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
  rule:
    proto: udp
    dport: 4789
- rule_name: 119 neutron geneve networks
  rule:
    proto: udp
    dport: 6081
    state: ["UNTRACKED"]
- rule_name: 120 neutron geneve networks no conntrack
  rule:
    proto: udp
    dport: 6081
    table: raw
    chain: OUTPUT
    jump: NOTRACK
    action: append
    state: []
- rule_name: 121 neutron geneve networks no conntrack
  rule:
    proto: udp
    dport: 6081
    table: raw
    chain: PREROUTING
    jump: NOTRACK
    action: append
    state: []
 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct  9 09:42:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:34.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:35 compute-0 python3.9[73239]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:35 compute-0 python3.9[73391]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:36.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:36 compute-0 python3.9[73470]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:36 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v269: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 1 op/s
Oct  9 09:42:36 compute-0 python3.9[73623]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:36.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:36.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:36.981Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:36.982Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:36.982Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:37 compute-0 python3.9[73701]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.kp1xwc_l recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:37 compute-0 python3.9[73853]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:37 compute-0 python3.9[73931]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:38.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:38 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v270: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 85 B/s wr, 1 op/s
Oct  9 09:42:38 compute-0 python3.9[74084]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:42:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:38.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:39 compute-0 python3[74238]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  9 09:42:39 compute-0 python3.9[74390]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:39 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:42:39 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:42:39 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:42:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:40.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:40 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v271: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 85 B/s wr, 1 op/s
Oct  9 09:42:40 compute-0 python3.9[74516]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002959.3740847-431-122161368675016/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:40 compute-0 python3.9[74669]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:42:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:40.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:42:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:41 compute-0 python3.9[74794]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002960.488421-476-32216381089477/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:41 compute-0 python3.9[74946]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:42.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:42:42] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Oct  9 09:42:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:42:42] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Oct  9 09:42:42 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v272: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 2 op/s
Oct  9 09:42:42 compute-0 python3.9[75072]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002961.6158075-521-178820093736833/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:42.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:42 compute-0 python3.9[75225]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:43 compute-0 python3.9[75350]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002962.5617795-566-187009908294917/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:43 compute-0 python3.9[75502]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:42:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:42:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:42:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:42:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:44.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:44 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v273: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 170 B/s wr, 1 op/s
Oct  9 09:42:44 compute-0 python3.9[75628]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002963.5326107-611-206463204120542/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:44.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:45 compute-0 python3.9[75781]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:45 compute-0 python3.9[75933]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
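At 09:42:45 the role syntax-checks the full EDPM ruleset: the five fragment files are concatenated in dependency order and piped through nft -c -f -, so nothing is committed unless the whole set parses. A minimal sketch of that check, assuming the same file layout as the logged command:

    import pathlib
    import subprocess

    # Fragment order matches the logged pipeline: chains first, then
    # flushes, rules, and the two jump files.
    FRAGMENTS = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    ruleset = b"".join(pathlib.Path(p).read_bytes() for p in FRAGMENTS)

    # -c = check only: parse and validate without committing anything.
    subprocess.run(["nft", "-c", "-f", "-"], input=ruleset, check=True)
    print("ruleset parses cleanly")
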
Oct  9 09:42:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:46.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:46 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v274: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 170 B/s wr, 2 op/s
Oct  9 09:42:46 compute-0 python3.9[76089]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
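Unescaping the #012 newlines in the blockinfile call above, the managed block written into /etc/sysconfig/nftables.conf (and validated with nft -c -f %s before the edit is committed) reads:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

This is what makes the ruleset persistent: the nftables systemd service loads nftables.conf at boot, which now pulls in the EDPM fragments.
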
Oct  9 09:42:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:46.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:46 compute-0 python3.9[76242]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:42:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:46.972Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:46.981Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:46.981Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:46.981Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
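All three ceph-dashboard webhook receivers fail the same way: the resolver at 192.168.122.80:53 has no records for np0005478302/3/4.shiftstack, so every notification retry is eventually canceled (the same triplet recurs at 09:42:56 and 09:43:06 below). A quick sketch of the same lookups, to confirm from this host whether the names resolve:

    import socket

    # Hostnames taken from the Alertmanager errors above; resolution goes
    # through the system resolver, so run this on the affected host.
    for host in ("np0005478302.shiftstack",
                 "np0005478303.shiftstack",
                 "np0005478304.shiftstack"):
        try:
            addr = socket.getaddrinfo(host, 8443)[0][4][0]
            print(f"{host} -> {addr}")
        except socket.gaierror as exc:
            print(f"{host}: {exc}")   # expected here: name does not resolve
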
Oct  9 09:42:47 compute-0 python3.9[76395]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:42:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:42:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:42:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:42:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:42:48 compute-0 python3.9[76549]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
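The apply step at 09:42:48 reuses the pipeline from the earlier check, minus -c and minus the chain definitions (those were already loaded at 09:42:46 via nft -f /etc/nftables/edpm-chains.nft): the flushes empty the EDPM chains, the rules repopulate them, and the jump hooks are refreshed, all as one nft transaction. A sketch under the same assumptions:

    import pathlib
    import subprocess

    APPLY_ORDER = [
        "/etc/nftables/edpm-flushes.nft",       # flush EDPM chains first
        "/etc/nftables/edpm-rules.nft",         # then repopulate the rules
        "/etc/nftables/edpm-update-jumps.nft",  # then refresh the jumps
    ]
    ruleset = b"".join(pathlib.Path(p).read_bytes() for p in APPLY_ORDER)

    # Without -c, nft loads the concatenation as a single atomic commit.
    subprocess.run(["nft", "-f", "-"], input=ruleset, check=True)
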
Oct  9 09:42:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:48.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:48 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v275: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 1 op/s
Oct  9 09:42:48 compute-0 python3.9[76706]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:48.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:42:49
Oct  9 09:42:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:42:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 09:42:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['default.rgw.log', '.nfs', 'backups', 'default.rgw.control', 'default.rgw.meta', 'vms', '.mgr', 'images', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct  9 09:42:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
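The balancer run above ends with "prepared 0/10 upmap changes": in upmap mode it may queue up to 10 pg-upmap entries per pass, and with all twelve pools already even there is nothing to move. A small sketch for inspecting the same state from the CLI (assumes an admin keyring on the host; the JSON keys are those exposed by recent Ceph releases):

    import json
    import subprocess

    out = subprocess.run(["ceph", "balancer", "status", "-f", "json"],
                         capture_output=True, text=True, check=True).stdout
    status = json.loads(out)
    print(status.get("active"), status.get("mode"))   # expect: True upmap
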
Oct  9 09:42:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:42:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:42:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:42:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:42:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:42:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:42:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:42:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:42:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:42:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:42:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:42:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:42:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:42:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:42:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:42:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:42:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:42:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:42:49 compute-0 python3.9[76881]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:42:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:50.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:50 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v276: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 1 op/s
Oct  9 09:42:50 compute-0 python3.9[77036]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:42:50 compute-0 ovs-vsctl[77037]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
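The ovs-vsctl call above registers this host as an OVN chassis by writing external_ids on the local Open_vSwitch row; ovn-controller reads these keys to find the southbound database (ovn-remote), the tunnel endpoint (ovn-encap-ip/ovn-encap-type), and the bridge mappings. A sketch that reads the same keys back for verification:

    import subprocess

    # Keys are the ones set in the logged command; requires a local
    # ovsdb-server (each value comes back quoted).
    for key in ("ovn-remote", "ovn-encap-ip", "ovn-encap-type",
                "ovn-bridge-mappings", "ovn-monitor-all"):
        val = subprocess.run(
            ["ovs-vsctl", "get", "Open_vSwitch", ".",
             f"external_ids:{key}"],
            capture_output=True, text=True, check=True).stdout.strip()
        print(f"{key} = {val}")
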
Oct  9 09:42:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:50.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:51 compute-0 python3.9[77189]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:42:51 compute-0 python3.9[77344]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:42:51 compute-0 ovs-vsctl[77345]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
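The sequence at 09:42:51 is a check-then-create: ovs-vsctl show | grep -q "Manager" probes for an existing Manager row, and one is then created listening on ptcp:6640:127.0.0.1, exposing the local ovsdb over loopback TCP (the ******** in the Ansible line is log redaction; the ovs-vsctl INFO line above shows the real invocation). An idempotent sketch of the same pattern:

    import subprocess

    show = subprocess.run(["ovs-vsctl", "show"],
                          capture_output=True, text=True, check=True).stdout
    if "Manager" not in show:
        # Matches the logged invocation: create the Manager row and attach
        # it to the Open_vSwitch table's manager_options in one transaction.
        subprocess.run(
            ["ovs-vsctl", "--timeout=5", "--id=@manager", "--",
             "create", "Manager", 'target="ptcp:6640:127.0.0.1"', "--",
             "add", "Open_vSwitch", ".", "manager_options", "@manager"],
            check=True)
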
Oct  9 09:42:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:52.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:42:52] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct  9 09:42:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:42:52] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct  9 09:42:52 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v277: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 85 B/s wr, 1 op/s
Oct  9 09:42:52 compute-0 python3.9[77496]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:42:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000014s ======
Oct  9 09:42:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:52.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Oct  9 09:42:52 compute-0 python3.9[77651]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:42:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:42:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:42:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:42:53 compute-0 python3.9[77803]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:53 compute-0 python3.9[77881]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:54.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:54 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v278: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:42:54 compute-0 python3.9[78034]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:54 compute-0 python3.9[78113]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:54.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:55 compute-0 python3.9[78265]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
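mode=420 in the call above is not a mistake: an unquoted 0644 in YAML is parsed as the decimal integer 420, and 420 is exactly octal 644, so /etc/systemd/system-preset still ends up rw-r--r--. The conversion, for the record:

    print(oct(420))        # '0o644', i.e. rw-r--r--
    print(int("644", 8))   # 420
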
Oct  9 09:42:55 compute-0 python3.9[78417]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:56 compute-0 python3.9[78496]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:56.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:56 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v279: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:42:56 compute-0 python3.9[78649]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:42:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:56.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:42:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:56.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:56.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:56.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:42:56.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:42:57 compute-0 python3.9[78727]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:57 compute-0 python3.9[78879]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:42:57 compute-0 systemd[1]: Reloading.
Oct  9 09:42:57 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:42:57 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
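The ansible systemd call translates to a daemon-reload followed by enable plus start; the two generator messages above are routine output of the reload (rc.local not marked executable, legacy network initscript wrapped in a generated unit). A CLI-equivalent sketch:

    import subprocess

    # Reload unit files so the freshly installed service and preset are
    # seen, then enable and start the unit in one step.
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "enable", "--now",
                    "edpm-container-shutdown"], check=True)
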
Oct  9 09:42:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:42:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:42:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:42:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:42:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:42:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:58.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:58 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v280: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:42:58 compute-0 python3.9[79069]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:58 compute-0 python3.9[79148]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:42:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:42:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:58.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:42:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
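Each pg_autoscaler line above computes pg target = capacity ratio x bias x PG budget. The budget implied by every logged value is 300 PGs; reading that as 3 OSDs at the default mon_target_pg_per_osd of 100 is an assumption (only the factor 300 is certain from the arithmetic). A check that reproduces the logged values:

    # (capacity ratio, bias) pairs copied from the log lines above.
    PG_BUDGET = 300

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
    }
    for name, (ratio, bias) in pools.items():
        # Prints 0.0021557249951162337, 0.0006104707950771635, and
        # 0.0006486252197694863, matching the pg targets in the log.
        print(f"{name}: pg target {ratio * bias * PG_BUDGET}")

Each target is then quantized to a power-of-two pg_num, which is why 0.002 becomes 1 and the near-zero pools stay at their current 16 or 32.
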
Oct  9 09:42:59 compute-0 python3.9[79300]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:59 compute-0 python3.9[79378]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:00 compute-0 python3.9[79530]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:43:00 compute-0 systemd[1]: Reloading.
Oct  9 09:43:00 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:43:00 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:43:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:43:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:00.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:43:00 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v281: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:43:00 compute-0 systemd[1]: Starting Create netns directory...
Oct  9 09:43:00 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 09:43:00 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 09:43:00 compute-0 systemd[1]: Finished Create netns directory.
Oct  9 09:43:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:00.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:01 compute-0 python3.9[79724]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:01 compute-0 python3.9[79876]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:01 compute-0 python3.9[79999]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760002981.2234607-1364-156838635666267/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:02.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:43:02] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct  9 09:43:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:43:02] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct  9 09:43:02 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v282: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:43:02 compute-0 python3.9[80153]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:43:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:02.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:43:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:03 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:03 compute-0 python3.9[80305]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:03 compute-0 python3.9[80428]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002983.0157373-1439-162668682311288/.source.json _original_basename=.4nvelygv follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
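ovn_controller.json is a kolla config descriptor: at startup the container's kolla entrypoint reads it to learn which command to exec and which config files to copy into place. The actual content is not logged (content=NOT_LOGGING_PARAMETER); the sketch below only illustrates the general shape of such a descriptor, with made-up example values:

    import json

    # Illustrative only: command and paths are hypothetical, not the
    # contents of the file installed above.
    descriptor = {
        "command": "/usr/bin/ovn-controller ...",   # hypothetical example
        "config_files": [
            {"source": "/var/lib/kolla/config_files/src/example.conf",
             "dest": "/etc/example.conf",
             "owner": "root", "perm": "0600"},      # hypothetical entry
        ],
    }
    print(json.dumps(descriptor, indent=2))
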
Oct  9 09:43:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:04.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:04 compute-0 python3.9[80581]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v283: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:43:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:43:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:43:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:04.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:05 compute-0 python3.9[81009]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct  9 09:43:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:06.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v284: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:43:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:43:06 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:43:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:43:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:43:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:43:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:43:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:43:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:43:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:43:06 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:43:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:43:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:43:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:43:06 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:43:06 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:43:06 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:43:06 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:43:06 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:43:06 compute-0 python3.9[81291]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
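container_config_hash digests the rendered startup configs under the config volume prefix so that changed content, not just a changed file list, forces a container restart. Conceptually (hash choice and traversal below are assumptions, not the module's exact algorithm):

    import hashlib
    import pathlib

    # config_vol_prefix from the logged invocation.
    prefix = pathlib.Path("/var/lib/config-data")

    for f in sorted(prefix.rglob("*.json")):
        digest = hashlib.sha256(f.read_bytes()).hexdigest()
        print(f"{f}: {digest[:12]}")
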
Oct  9 09:43:06 compute-0 podman[81323]: 2025-10-09 09:43:06.827247208 +0000 UTC m=+0.030025472 container create 2712032d123a84e8f2754f61e1b70f615087107391a597b075d6fb511e23bbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chatterjee, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:43:06 compute-0 systemd[1]: Started libpod-conmon-2712032d123a84e8f2754f61e1b70f615087107391a597b075d6fb511e23bbd2.scope.
Oct  9 09:43:06 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:43:06 compute-0 podman[81323]: 2025-10-09 09:43:06.879547869 +0000 UTC m=+0.082326143 container init 2712032d123a84e8f2754f61e1b70f615087107391a597b075d6fb511e23bbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chatterjee, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:43:06 compute-0 podman[81323]: 2025-10-09 09:43:06.884318288 +0000 UTC m=+0.087096552 container start 2712032d123a84e8f2754f61e1b70f615087107391a597b075d6fb511e23bbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chatterjee, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 09:43:06 compute-0 podman[81323]: 2025-10-09 09:43:06.885610126 +0000 UTC m=+0.088388400 container attach 2712032d123a84e8f2754f61e1b70f615087107391a597b075d6fb511e23bbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  9 09:43:06 compute-0 dreamy_chatterjee[81360]: 167 167
Oct  9 09:43:06 compute-0 systemd[1]: libpod-2712032d123a84e8f2754f61e1b70f615087107391a597b075d6fb511e23bbd2.scope: Deactivated successfully.
Oct  9 09:43:06 compute-0 podman[81323]: 2025-10-09 09:43:06.888636755 +0000 UTC m=+0.091415018 container died 2712032d123a84e8f2754f61e1b70f615087107391a597b075d6fb511e23bbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chatterjee, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:43:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-0eca40e504015bfbb2443b4d760281df85986e12d69a35dc620ebbfe9e56a140-merged.mount: Deactivated successfully.
Oct  9 09:43:06 compute-0 podman[81323]: 2025-10-09 09:43:06.907387085 +0000 UTC m=+0.110165349 container remove 2712032d123a84e8f2754f61e1b70f615087107391a597b075d6fb511e23bbd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:43:06 compute-0 podman[81323]: 2025-10-09 09:43:06.814129162 +0000 UTC m=+0.016907446 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
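The dreamy_chatterjee container above lives for roughly 70 ms: create, start, one line of output ("167 167"), die, remove. That is cephadm probing the ceph image for the numeric uid/gid of the ceph user so host-side directories can be chowned to match. An equivalent probe (the stat entrypoint is an assumption; the log records only the container's output):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # Run the image once with stat as the entrypoint and ask for the
    # owner of /var/lib/ceph inside the image.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout.strip()
    print(out)   # expected, per the log: "167 167"
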
Oct  9 09:43:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:06.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:06 compute-0 systemd[1]: libpod-conmon-2712032d123a84e8f2754f61e1b70f615087107391a597b075d6fb511e23bbd2.scope: Deactivated successfully.
Oct  9 09:43:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:06.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:43:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:06.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:43:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:06.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:43:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:06.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
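All three webhook failures above share one root cause: the resolver at 192.168.122.80:53 has no records for the np0005478302-304.shiftstack names that the ceph-dashboard receiver posts to, so every notify attempt fails before the TCP dial even starts. A small sketch that runs the same lookups through the system resolver to confirm which names are missing (hostnames taken verbatim from the error text):

    import socket

    # The three receiver endpoints alertmanager keeps retrying.
    hosts = [
        "np0005478302.shiftstack",
        "np0005478303.shiftstack",
        "np0005478304.shiftstack",
    ]

    for host in hosts:
        try:
            # Same name lookup the Go dialer performs before "dial tcp".
            infos = socket.getaddrinfo(host, 8443, proto=socket.IPPROTO_TCP)
            print(host, sorted({ai[4][0] for ai in infos}))
        except socket.gaierror as exc:
            # Mirrors the journal error: lookup <host> ...: no such host
            print(host, "unresolved:", exc)

Adding records for these names on 192.168.122.80 (or /etc/hosts entries on the alertmanager host) would clear the retry loop.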
Oct  9 09:43:07 compute-0 podman[81383]: 2025-10-09 09:43:07.024862334 +0000 UTC m=+0.030121392 container create 5903bb457b588e8543ad18568f54870645794ca7aa6017e08a73adfd4b46744d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  9 09:43:07 compute-0 systemd[1]: Started libpod-conmon-5903bb457b588e8543ad18568f54870645794ca7aa6017e08a73adfd4b46744d.scope.
Oct  9 09:43:07 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:43:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdb74d339d49e113db6e6af93431066aa6eaf88b84ae3d3da6c056a5ef9c749/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:43:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdb74d339d49e113db6e6af93431066aa6eaf88b84ae3d3da6c056a5ef9c749/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:43:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdb74d339d49e113db6e6af93431066aa6eaf88b84ae3d3da6c056a5ef9c749/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:43:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdb74d339d49e113db6e6af93431066aa6eaf88b84ae3d3da6c056a5ef9c749/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:43:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdb74d339d49e113db6e6af93431066aa6eaf88b84ae3d3da6c056a5ef9c749/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:43:07 compute-0 podman[81383]: 2025-10-09 09:43:07.083601169 +0000 UTC m=+0.088860249 container init 5903bb457b588e8543ad18568f54870645794ca7aa6017e08a73adfd4b46744d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_morse, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 09:43:07 compute-0 podman[81383]: 2025-10-09 09:43:07.091769017 +0000 UTC m=+0.097028076 container start 5903bb457b588e8543ad18568f54870645794ca7aa6017e08a73adfd4b46744d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_morse, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 09:43:07 compute-0 podman[81383]: 2025-10-09 09:43:07.093067758 +0000 UTC m=+0.098326816 container attach 5903bb457b588e8543ad18568f54870645794ca7aa6017e08a73adfd4b46744d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_morse, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 09:43:07 compute-0 podman[81383]: 2025-10-09 09:43:07.012702966 +0000 UTC m=+0.017962045 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:43:07 compute-0 festive_morse[81436]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:43:07 compute-0 festive_morse[81436]: --> All data devices are unavailable
Oct  9 09:43:07 compute-0 systemd[1]: libpod-5903bb457b588e8543ad18568f54870645794ca7aa6017e08a73adfd4b46744d.scope: Deactivated successfully.
Oct  9 09:43:07 compute-0 podman[81383]: 2025-10-09 09:43:07.354665692 +0000 UTC m=+0.359924751 container died 5903bb457b588e8543ad18568f54870645794ca7aa6017e08a73adfd4b46744d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_morse, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:43:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fdb74d339d49e113db6e6af93431066aa6eaf88b84ae3d3da6c056a5ef9c749-merged.mount: Deactivated successfully.
Oct  9 09:43:07 compute-0 podman[81383]: 2025-10-09 09:43:07.381084528 +0000 UTC m=+0.386343587 container remove 5903bb457b588e8543ad18568f54870645794ca7aa6017e08a73adfd4b46744d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_morse, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:43:07 compute-0 systemd[1]: libpod-conmon-5903bb457b588e8543ad18568f54870645794ca7aa6017e08a73adfd4b46744d.scope: Deactivated successfully.
Oct  9 09:43:07 compute-0 python3.9[81548]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  9 09:43:07 compute-0 podman[81675]: 2025-10-09 09:43:07.792172724 +0000 UTC m=+0.029057596 container create a80a456efde210130bb809b86fe617bdf30050425fd9e4890a58c6158edfd6ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  9 09:43:07 compute-0 systemd[1]: Started libpod-conmon-a80a456efde210130bb809b86fe617bdf30050425fd9e4890a58c6158edfd6ec.scope.
Oct  9 09:43:07 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:43:07 compute-0 podman[81675]: 2025-10-09 09:43:07.843806487 +0000 UTC m=+0.080691359 container init a80a456efde210130bb809b86fe617bdf30050425fd9e4890a58c6158edfd6ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_bardeen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  9 09:43:07 compute-0 podman[81675]: 2025-10-09 09:43:07.84808657 +0000 UTC m=+0.084971433 container start a80a456efde210130bb809b86fe617bdf30050425fd9e4890a58c6158edfd6ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_bardeen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:43:07 compute-0 podman[81675]: 2025-10-09 09:43:07.849495789 +0000 UTC m=+0.086380661 container attach a80a456efde210130bb809b86fe617bdf30050425fd9e4890a58c6158edfd6ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_bardeen, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:43:07 compute-0 sleepy_bardeen[81687]: 167 167
Oct  9 09:43:07 compute-0 systemd[1]: libpod-a80a456efde210130bb809b86fe617bdf30050425fd9e4890a58c6158edfd6ec.scope: Deactivated successfully.
Oct  9 09:43:07 compute-0 conmon[81687]: conmon a80a456efde210130bb8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a80a456efde210130bb809b86fe617bdf30050425fd9e4890a58c6158edfd6ec.scope/container/memory.events
Oct  9 09:43:07 compute-0 podman[81675]: 2025-10-09 09:43:07.851799544 +0000 UTC m=+0.088684407 container died a80a456efde210130bb809b86fe617bdf30050425fd9e4890a58c6158edfd6ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_bardeen, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:43:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-75715549711fc040d0783d8d374608b261233840d59d5ac9e6c1c824e4ca7119-merged.mount: Deactivated successfully.
Oct  9 09:43:07 compute-0 podman[81675]: 2025-10-09 09:43:07.875904297 +0000 UTC m=+0.112789159 container remove a80a456efde210130bb809b86fe617bdf30050425fd9e4890a58c6158edfd6ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_bardeen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 09:43:07 compute-0 podman[81675]: 2025-10-09 09:43:07.779843615 +0000 UTC m=+0.016728498 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:43:07 compute-0 systemd[1]: libpod-conmon-a80a456efde210130bb809b86fe617bdf30050425fd9e4890a58c6158edfd6ec.scope: Deactivated successfully.
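The short-lived sleepy_bardeen container printed only "167 167" before exiting, which matches the uid/gid of the ceph user inside the ceph image; cephadm appears to run throwaway containers like this to learn the ownership it must apply to host directories. A hedged reproduction of that probe; the exact command cephadm runs internally is an assumption:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # Assumption: report the owner of /var/lib/ceph inside the image,
    # which is what the "167 167" output above corresponds to.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # expected: "167 167" (ceph:ceph in the image)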
Oct  9 09:43:07 compute-0 podman[81710]: 2025-10-09 09:43:07.990288754 +0000 UTC m=+0.029790288 container create e458fd98728af3ba6148b3fe98970d3fd3c3b7e7998df8bc430ec22de664b754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  9 09:43:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:08 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
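ganesha's rados_cluster_grace_enforcing reports ret=-45 while the grace database still counts zero clients. Reading the value as a negated Linux errno (the usual convention for such return codes, though that reading is an assumption here) gives EL2NSYNC. A one-liner to decode it:

    import errno
    import os

    ret = -45    # value from the rados_cluster_grace_enforcing line above
    code = -ret  # assumption: ret is a negated errno
    print(errno.errorcode.get(code), "=", os.strerror(code))
    # on Linux: EL2NSYNC = Level 2 not synchronized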
Oct  9 09:43:08 compute-0 systemd[1]: Started libpod-conmon-e458fd98728af3ba6148b3fe98970d3fd3c3b7e7998df8bc430ec22de664b754.scope.
Oct  9 09:43:08 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7758d8cfb334994a5f57f8c7bbde0c6d67862d7f45bef2079c4f91fad5d6c13e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7758d8cfb334994a5f57f8c7bbde0c6d67862d7f45bef2079c4f91fad5d6c13e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7758d8cfb334994a5f57f8c7bbde0c6d67862d7f45bef2079c4f91fad5d6c13e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7758d8cfb334994a5f57f8c7bbde0c6d67862d7f45bef2079c4f91fad5d6c13e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:43:08 compute-0 podman[81710]: 2025-10-09 09:43:08.043513628 +0000 UTC m=+0.083015162 container init e458fd98728af3ba6148b3fe98970d3fd3c3b7e7998df8bc430ec22de664b754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:43:08 compute-0 podman[81710]: 2025-10-09 09:43:08.048018135 +0000 UTC m=+0.087519669 container start e458fd98728af3ba6148b3fe98970d3fd3c3b7e7998df8bc430ec22de664b754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:43:08 compute-0 podman[81710]: 2025-10-09 09:43:08.049252764 +0000 UTC m=+0.088754298 container attach e458fd98728af3ba6148b3fe98970d3fd3c3b7e7998df8bc430ec22de664b754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_mclean, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:43:08 compute-0 podman[81710]: 2025-10-09 09:43:07.979353846 +0000 UTC m=+0.018855400 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:43:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:08.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:08 compute-0 reverent_mclean[81724]: {
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:    "1": [
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:        {
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:            "devices": [
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:                "/dev/loop3"
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:            ],
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:            "lv_name": "ceph_lv0",
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:            "lv_size": "21470642176",
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:            "name": "ceph_lv0",
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:            "tags": {
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:                "ceph.cluster_name": "ceph",
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:                "ceph.crush_device_class": "",
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:                "ceph.encrypted": "0",
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:                "ceph.osd_id": "1",
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:                "ceph.type": "block",
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:                "ceph.vdo": "0",
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:                "ceph.with_tpm": "0"
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:            },
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:            "type": "block",
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:            "vg_name": "ceph_vg0"
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:        }
Oct  9 09:43:08 compute-0 reverent_mclean[81724]:    ]
Oct  9 09:43:08 compute-0 reverent_mclean[81724]: }
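The reverent_mclean output above is a ceph-volume style LVM report: a JSON map from OSD id to the logical volumes backing it. It also explains the earlier festive_morse verdict "All data devices are unavailable": the single candidate LVM device (/dev/loop3 via ceph_vg0/ceph_lv0) already carries osd.1, so there is nothing left to provision. A sketch that parses the same shape of report (abridged copy below, with the log prefixes stripped and the long lv_tags string dropped):

    import json

    # Abridged copy of the JSON emitted above.
    report = json.loads("""
    {
      "1": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "tags": {
            "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
            "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
            "ceph.osd_id": "1",
            "ceph.type": "block"
          }
        }
      ]
    }
    """)

    for osd_id, lvs in report.items():
        for lv in lvs:
            devs = ",".join(lv["devices"])
            print(f"osd.{osd_id}: {lv['tags']['ceph.type']} "
                  f"on {lv['lv_path']} ({devs})")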
Oct  9 09:43:08 compute-0 systemd[1]: libpod-e458fd98728af3ba6148b3fe98970d3fd3c3b7e7998df8bc430ec22de664b754.scope: Deactivated successfully.
Oct  9 09:43:08 compute-0 podman[81710]: 2025-10-09 09:43:08.27888106 +0000 UTC m=+0.318382593 container died e458fd98728af3ba6148b3fe98970d3fd3c3b7e7998df8bc430ec22de664b754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_mclean, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:43:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-7758d8cfb334994a5f57f8c7bbde0c6d67862d7f45bef2079c4f91fad5d6c13e-merged.mount: Deactivated successfully.
Oct  9 09:43:08 compute-0 podman[81710]: 2025-10-09 09:43:08.305098427 +0000 UTC m=+0.344599961 container remove e458fd98728af3ba6148b3fe98970d3fd3c3b7e7998df8bc430ec22de664b754 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_mclean, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:43:08 compute-0 systemd[1]: libpod-conmon-e458fd98728af3ba6148b3fe98970d3fd3c3b7e7998df8bc430ec22de664b754.scope: Deactivated successfully.
Oct  9 09:43:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v285: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:43:08 compute-0 podman[81904]: 2025-10-09 09:43:08.722394393 +0000 UTC m=+0.032799915 container create c03e230a422d148b81864f7307155a62aae9443f65e4e3b5484c9eec7318b6b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:43:08 compute-0 systemd[1]: Started libpod-conmon-c03e230a422d148b81864f7307155a62aae9443f65e4e3b5484c9eec7318b6b5.scope.
Oct  9 09:43:08 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:43:08 compute-0 podman[81904]: 2025-10-09 09:43:08.76579788 +0000 UTC m=+0.076203412 container init c03e230a422d148b81864f7307155a62aae9443f65e4e3b5484c9eec7318b6b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:43:08 compute-0 podman[81904]: 2025-10-09 09:43:08.771296022 +0000 UTC m=+0.081701535 container start c03e230a422d148b81864f7307155a62aae9443f65e4e3b5484c9eec7318b6b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_maxwell, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:43:08 compute-0 podman[81904]: 2025-10-09 09:43:08.772512396 +0000 UTC m=+0.082917908 container attach c03e230a422d148b81864f7307155a62aae9443f65e4e3b5484c9eec7318b6b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_maxwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:43:08 compute-0 nervous_maxwell[81959]: 167 167
Oct  9 09:43:08 compute-0 systemd[1]: libpod-c03e230a422d148b81864f7307155a62aae9443f65e4e3b5484c9eec7318b6b5.scope: Deactivated successfully.
Oct  9 09:43:08 compute-0 podman[81904]: 2025-10-09 09:43:08.775652801 +0000 UTC m=+0.086058313 container died c03e230a422d148b81864f7307155a62aae9443f65e4e3b5484c9eec7318b6b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_maxwell, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  9 09:43:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-10a73b6757e70b35562ede99bd1b940b7bd63a94e491354f81571536d2b15e63-merged.mount: Deactivated successfully.
Oct  9 09:43:08 compute-0 podman[81904]: 2025-10-09 09:43:08.794353298 +0000 UTC m=+0.104758809 container remove c03e230a422d148b81864f7307155a62aae9443f65e4e3b5484c9eec7318b6b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_maxwell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:43:08 compute-0 podman[81904]: 2025-10-09 09:43:08.708627954 +0000 UTC m=+0.019033466 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:43:08 compute-0 systemd[1]: libpod-conmon-c03e230a422d148b81864f7307155a62aae9443f65e4e3b5484c9eec7318b6b5.scope: Deactivated successfully.
Oct  9 09:43:08 compute-0 podman[81988]: 2025-10-09 09:43:08.908008209 +0000 UTC m=+0.027553649 container create 306ffa6ef1d0b0f7dbf7c401d726ba20d7658639d945e0dd8f49eb13ace950d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:43:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:08.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:08 compute-0 systemd[1]: Started libpod-conmon-306ffa6ef1d0b0f7dbf7c401d726ba20d7658639d945e0dd8f49eb13ace950d7.scope.
Oct  9 09:43:08 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7dbe62a12592537440bb55b6a088d4f6a62020f48f2113333d173596547d703/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7dbe62a12592537440bb55b6a088d4f6a62020f48f2113333d173596547d703/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7dbe62a12592537440bb55b6a088d4f6a62020f48f2113333d173596547d703/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7dbe62a12592537440bb55b6a088d4f6a62020f48f2113333d173596547d703/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:43:08 compute-0 podman[81988]: 2025-10-09 09:43:08.961557064 +0000 UTC m=+0.081102504 container init 306ffa6ef1d0b0f7dbf7c401d726ba20d7658639d945e0dd8f49eb13ace950d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_curie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Oct  9 09:43:08 compute-0 podman[81988]: 2025-10-09 09:43:08.966457058 +0000 UTC m=+0.086002498 container start 306ffa6ef1d0b0f7dbf7c401d726ba20d7658639d945e0dd8f49eb13ace950d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:43:08 compute-0 podman[81988]: 2025-10-09 09:43:08.968089056 +0000 UTC m=+0.087634496 container attach 306ffa6ef1d0b0f7dbf7c401d726ba20d7658639d945e0dd8f49eb13ace950d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_curie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:43:08 compute-0 podman[81988]: 2025-10-09 09:43:08.896661604 +0000 UTC m=+0.016207064 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:43:08 compute-0 python3[81969]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  9 09:43:09 compute-0 lvm[82123]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:43:09 compute-0 lvm[82123]: VG ceph_vg0 finished
Oct  9 09:43:09 compute-0 objective_curie[82002]: {}
Oct  9 09:43:09 compute-0 systemd[1]: libpod-306ffa6ef1d0b0f7dbf7c401d726ba20d7658639d945e0dd8f49eb13ace950d7.scope: Deactivated successfully.
Oct  9 09:43:09 compute-0 podman[81988]: 2025-10-09 09:43:09.454595873 +0000 UTC m=+0.574141312 container died 306ffa6ef1d0b0f7dbf7c401d726ba20d7658639d945e0dd8f49eb13ace950d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 09:43:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7dbe62a12592537440bb55b6a088d4f6a62020f48f2113333d173596547d703-merged.mount: Deactivated successfully.
Oct  9 09:43:09 compute-0 podman[81988]: 2025-10-09 09:43:09.474730897 +0000 UTC m=+0.594276336 container remove 306ffa6ef1d0b0f7dbf7c401d726ba20d7658639d945e0dd8f49eb13ace950d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_curie, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:43:09 compute-0 systemd[1]: libpod-conmon-306ffa6ef1d0b0f7dbf7c401d726ba20d7658639d945e0dd8f49eb13ace950d7.scope: Deactivated successfully.
Oct  9 09:43:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:43:09 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:43:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:43:09 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:43:09 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:43:09 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:43:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:43:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:10.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:43:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v286: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:43:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:43:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:10.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:43:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:11.121985) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991122019, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2080, "num_deletes": 250, "total_data_size": 4133175, "memory_usage": 4191016, "flush_reason": "Manual Compaction"}
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991127185, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2417462, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10683, "largest_seqno": 12761, "table_properties": {"data_size": 2410776, "index_size": 3500, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17020, "raw_average_key_size": 20, "raw_value_size": 2395934, "raw_average_value_size": 2852, "num_data_blocks": 156, "num_entries": 840, "num_filter_entries": 840, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002784, "oldest_key_time": 1760002784, "file_creation_time": 1760002991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 5275 microseconds, and 3930 cpu microseconds.
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:11.127260) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2417462 bytes OK
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:11.127300) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:11.127657) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:11.127670) EVENT_LOG_v1 {"time_micros": 1760002991127666, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:11.127680) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4124809, prev total WAL file size 4124809, number of live WAL files 2.
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:11.128536) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2360KB)], [26(13MB)]
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991128584, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 16869006, "oldest_snapshot_seqno": -1}
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4395 keys, 14823701 bytes, temperature: kUnknown
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991155858, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 14823701, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14789543, "index_size": 22080, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11013, "raw_key_size": 110391, "raw_average_key_size": 25, "raw_value_size": 14704674, "raw_average_value_size": 3345, "num_data_blocks": 954, "num_entries": 4395, "num_filter_entries": 4395, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002419, "oldest_key_time": 0, "file_creation_time": 1760002991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:11.156008) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14823701 bytes
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:11.166168) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 617.7 rd, 542.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 13.8 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(13.1) write-amplify(6.1) OK, records in: 4816, records dropped: 421 output_compression: NoCompression
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:11.166187) EVENT_LOG_v1 {"time_micros": 1760002991166179, "job": 10, "event": "compaction_finished", "compaction_time_micros": 27311, "compaction_time_cpu_micros": 19645, "output_level": 6, "num_output_files": 1, "total_output_size": 14823701, "num_input_records": 4816, "num_output_records": 4395, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991166509, "job": 10, "event": "table_file_deletion", "file_number": 28}
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991167942, "job": 10, "event": "table_file_deletion", "file_number": 26}
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:11.128487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:11.167983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:11.167986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:11.167988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:11.167989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:11.167990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
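The mon's rocksdb lines embed machine-readable JSON after the literal EVENT_LOG_v1 marker (flush_started, table_file_creation, compaction_finished, and so on), so the whole flush-plus-manual-compaction cycle above can be summarized mechanically. A sketch of such a scraper; the input filename is a placeholder:

    import json

    MARKER = "EVENT_LOG_v1 "

    def rocksdb_events(lines):
        """Yield the JSON payloads rocksdb appends after the marker."""
        for line in lines:
            _, found, payload = line.partition(MARKER)
            if found and payload.lstrip().startswith("{"):
                yield json.loads(payload)

    # Placeholder path: feed it whatever file holds these journal lines.
    with open("mon-journal.log") as fh:
        for ev in rocksdb_events(fh):
            if ev.get("event") == "compaction_finished":
                print(f'job {ev["job"]}: level {ev["output_level"]}, '
                      f'{ev["num_output_records"]} records, '
                      f'{ev["total_output_size"]/1e6:.1f} MB in '
                      f'{ev["compaction_time_micros"]/1e6:.3f} s')

For the compaction above this prints roughly "job 10: level 6, 4395 records, 14.8 MB in 0.027 s", consistent with the read-write-amplify(13.1) summary line.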
Oct  9 09:43:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:12.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:43:12] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Oct  9 09:43:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:43:12] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Oct  9 09:43:12 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v287: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:43:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:43:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:12.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:43:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:13 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:13 compute-0 podman[82029]: 2025-10-09 09:43:13.831936158 +0000 UTC m=+4.791192846 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct  9 09:43:13 compute-0 podman[82253]: 2025-10-09 09:43:13.928067042 +0000 UTC m=+0.029191578 container create 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  9 09:43:13 compute-0 podman[82253]: 2025-10-09 09:43:13.914790337 +0000 UTC m=+0.015914893 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct  9 09:43:13 compute-0 python3[81969]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct  9 09:43:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:14.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:14 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v288: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:14.433727) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994433796, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 284, "num_deletes": 251, "total_data_size": 80836, "memory_usage": 86296, "flush_reason": "Manual Compaction"}
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994435081, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 80683, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12762, "largest_seqno": 13045, "table_properties": {"data_size": 78773, "index_size": 138, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4668, "raw_average_key_size": 17, "raw_value_size": 75088, "raw_average_value_size": 279, "num_data_blocks": 6, "num_entries": 269, "num_filter_entries": 269, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002992, "oldest_key_time": 1760002992, "file_creation_time": 1760002994, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 1402 microseconds, and 989 cpu microseconds.
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:14.435132) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 80683 bytes OK
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:14.435164) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:14.435650) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:14.435662) EVENT_LOG_v1 {"time_micros": 1760002994435659, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:14.435678) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 78717, prev total WAL file size 78717, number of live WAL files 2.
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:14.436065) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(78KB)], [29(14MB)]
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994436128, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 14904384, "oldest_snapshot_seqno": -1}
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4155 keys, 11557890 bytes, temperature: kUnknown
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994465300, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 11557890, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11527009, "index_size": 19379, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10437, "raw_key_size": 106342, "raw_average_key_size": 25, "raw_value_size": 11447997, "raw_average_value_size": 2755, "num_data_blocks": 828, "num_entries": 4155, "num_filter_entries": 4155, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002419, "oldest_key_time": 0, "file_creation_time": 1760002994, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:14.465590) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11557890 bytes
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:14.466086) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 508.6 rd, 394.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 14.1 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(328.0) write-amplify(143.3) OK, records in: 4664, records dropped: 509 output_compression: NoCompression
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:14.466104) EVENT_LOG_v1 {"time_micros": 1760002994466095, "job": 12, "event": "compaction_finished", "compaction_time_micros": 29302, "compaction_time_cpu_micros": 18220, "output_level": 6, "num_output_files": 1, "total_output_size": 11557890, "num_input_records": 4664, "num_output_records": 4155, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994466617, "job": 12, "event": "table_file_deletion", "file_number": 31}
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994468302, "job": 12, "event": "table_file_deletion", "file_number": 29}
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:14.435979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:14.468441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:14.468446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:14.468447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:14.468448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:14 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:43:14.468450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:14 compute-0 python3.9[82433]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:43:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:14.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:15 compute-0 python3.9[82587]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:15 compute-0 python3.9[82663]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:43:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:43:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:16.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:43:16 compute-0 python3.9[82815]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760002995.7982152-1703-67343268591010/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:16 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v289: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:43:16 compute-0 python3.9[82891]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:43:16 compute-0 systemd[1]: Reloading.
Oct  9 09:43:16 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:43:16 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:43:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:16.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:16.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:43:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:16.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:43:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:16.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:43:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:16.986Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:43:17 compute-0 python3.9[83003]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:43:17 compute-0 systemd[1]: Reloading.
Oct  9 09:43:17 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:43:17 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:43:17 compute-0 systemd[1]: Starting ovn_controller container...
Oct  9 09:43:17 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f743dd4e6b4a981c69f85c4b9ed5cf2500377174e4fbe09893d6c7510fc08d8e/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct  9 09:43:17 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962.
Oct  9 09:43:17 compute-0 podman[83044]: 2025-10-09 09:43:17.709112354 +0000 UTC m=+0.082251031 container init 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:43:17 compute-0 ovn_controller[83056]: + sudo -E kolla_set_configs
Oct  9 09:43:17 compute-0 podman[83044]: 2025-10-09 09:43:17.727912349 +0000 UTC m=+0.101051015 container start 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Oct  9 09:43:17 compute-0 edpm-start-podman-container[83044]: ovn_controller
Oct  9 09:43:17 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct  9 09:43:17 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  9 09:43:17 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  9 09:43:17 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct  9 09:43:17 compute-0 edpm-start-podman-container[83043]: Creating additional drop-in dependency for "ovn_controller" (0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962)
Oct  9 09:43:17 compute-0 podman[83063]: 2025-10-09 09:43:17.78370095 +0000 UTC m=+0.047653325 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:43:17 compute-0 systemd[1]: 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962-47f9f66506192c29.service: Main process exited, code=exited, status=1/FAILURE
Oct  9 09:43:17 compute-0 systemd[1]: 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962-47f9f66506192c29.service: Failed with result 'exit-code'.
Oct  9 09:43:17 compute-0 systemd[1]: Reloading.
Oct  9 09:43:17 compute-0 systemd[83088]: Queued start job for default target Main User Target.
Oct  9 09:43:17 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:43:17 compute-0 systemd[83088]: Created slice User Application Slice.
Oct  9 09:43:17 compute-0 systemd[83088]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  9 09:43:17 compute-0 systemd[83088]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 09:43:17 compute-0 systemd[83088]: Reached target Paths.
Oct  9 09:43:17 compute-0 systemd[83088]: Reached target Timers.
Oct  9 09:43:17 compute-0 systemd[83088]: Starting D-Bus User Message Bus Socket...
Oct  9 09:43:17 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:43:17 compute-0 systemd[83088]: Starting Create User's Volatile Files and Directories...
Oct  9 09:43:17 compute-0 systemd[83088]: Listening on D-Bus User Message Bus Socket.
Oct  9 09:43:17 compute-0 systemd[83088]: Reached target Sockets.
Oct  9 09:43:17 compute-0 systemd[83088]: Finished Create User's Volatile Files and Directories.
Oct  9 09:43:17 compute-0 systemd[83088]: Reached target Basic System.
Oct  9 09:43:17 compute-0 systemd[83088]: Reached target Main User Target.
Oct  9 09:43:17 compute-0 systemd[83088]: Startup finished in 100ms.
Oct  9 09:43:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:17 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:17 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:17 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:18 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:18 compute-0 systemd[1]: Started User Manager for UID 0.
Oct  9 09:43:18 compute-0 systemd[1]: Started ovn_controller container.
Oct  9 09:43:18 compute-0 systemd[1]: Started Session c1 of User root.
Oct  9 09:43:18 compute-0 ovn_controller[83056]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  9 09:43:18 compute-0 ovn_controller[83056]: INFO:__main__:Validating config file
Oct  9 09:43:18 compute-0 ovn_controller[83056]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  9 09:43:18 compute-0 ovn_controller[83056]: INFO:__main__:Writing out command to execute
Oct  9 09:43:18 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Oct  9 09:43:18 compute-0 ovn_controller[83056]: ++ cat /run_command
Oct  9 09:43:18 compute-0 ovn_controller[83056]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  9 09:43:18 compute-0 ovn_controller[83056]: + ARGS=
Oct  9 09:43:18 compute-0 ovn_controller[83056]: + sudo kolla_copy_cacerts
Oct  9 09:43:18 compute-0 systemd[1]: Started Session c2 of User root.
Oct  9 09:43:18 compute-0 ovn_controller[83056]: + [[ ! -n '' ]]
Oct  9 09:43:18 compute-0 ovn_controller[83056]: + . kolla_extend_start
Oct  9 09:43:18 compute-0 ovn_controller[83056]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  9 09:43:18 compute-0 ovn_controller[83056]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct  9 09:43:18 compute-0 ovn_controller[83056]: + umask 0022
Oct  9 09:43:18 compute-0 ovn_controller[83056]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct  9 09:43:18 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct  9 09:43:18 compute-0 NetworkManager[982]: <info>  [1760002998.1358] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct  9 09:43:18 compute-0 NetworkManager[982]: <info>  [1760002998.1363] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:43:18 compute-0 NetworkManager[982]: <info>  [1760002998.1373] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct  9 09:43:18 compute-0 NetworkManager[982]: <info>  [1760002998.1379] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct  9 09:43:18 compute-0 NetworkManager[982]: <info>  [1760002998.1382] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:43:18 compute-0 kernel: br-int: entered promiscuous mode
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00022|main|INFO|OVS feature set changed, force recompute.
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  9 09:43:18 compute-0 ovn_controller[83056]: 2025-10-09T09:43:18Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  9 09:43:18 compute-0 NetworkManager[982]: <info>  [1760002998.1557] manager: (ovn-fc69d3-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct  9 09:43:18 compute-0 systemd-udevd[83186]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:43:18 compute-0 NetworkManager[982]: <info>  [1760002998.1616] manager: (ovn-c24bec-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Oct  9 09:43:18 compute-0 NetworkManager[982]: <info>  [1760002998.1664] manager: (ovn-1479fb-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Oct  9 09:43:18 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Oct  9 09:43:18 compute-0 systemd-udevd[83192]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:43:18 compute-0 NetworkManager[982]: <info>  [1760002998.1740] device (genev_sys_6081): carrier: link connected
Oct  9 09:43:18 compute-0 NetworkManager[982]: <info>  [1760002998.1744] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Oct  9 09:43:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:18.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:18 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v290: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:43:18 compute-0 python3.9[83317]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:43:18 compute-0 ovs-vsctl[83319]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct  9 09:43:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:18.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:19 compute-0 python3.9[83471]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:43:19 compute-0 ovs-vsctl[83473]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct  9 09:43:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:43:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:43:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:43:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:43:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:43:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:43:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:43:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:43:19 compute-0 python3.9[83626]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:43:19 compute-0 ovs-vsctl[83627]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct  9 09:43:20 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Oct  9 09:43:20 compute-0 systemd[1]: session-34.scope: Consumed 42.343s CPU time.
Oct  9 09:43:20 compute-0 systemd-logind[798]: Session 34 logged out. Waiting for processes to exit.
Oct  9 09:43:20 compute-0 systemd-logind[798]: Removed session 34.
Oct  9 09:43:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:43:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:20.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:43:20 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v291: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:43:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:20.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:43:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:22.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:43:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:43:22] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  9 09:43:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:43:22] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  9 09:43:22 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v292: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:43:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:22.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:24.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:24 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v293: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:43:24 compute-0 systemd-logind[798]: New session 36 of user zuul.
Oct  9 09:43:24 compute-0 systemd[1]: Started Session 36 of User zuul.
Oct  9 09:43:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:24.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:25 compute-0 python3.9[83811]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:43:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:26.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:26 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v294: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:43:26 compute-0 python3.9[83969]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:26.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:26.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:43:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:26.987Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:43:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:26.987Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:43:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:26.988Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:43:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:26 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:26 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:26 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:26 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:27 compute-0 python3.9[84121]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:27 compute-0 python3.9[84273]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:28 compute-0 python3.9[84425]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:28.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:28 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct  9 09:43:28 compute-0 systemd[83088]: Activating special unit Exit the Session...
Oct  9 09:43:28 compute-0 systemd[83088]: Stopped target Main User Target.
Oct  9 09:43:28 compute-0 systemd[83088]: Stopped target Basic System.
Oct  9 09:43:28 compute-0 systemd[83088]: Stopped target Paths.
Oct  9 09:43:28 compute-0 systemd[83088]: Stopped target Sockets.
Oct  9 09:43:28 compute-0 systemd[83088]: Stopped target Timers.
Oct  9 09:43:28 compute-0 systemd[83088]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  9 09:43:28 compute-0 systemd[83088]: Closed D-Bus User Message Bus Socket.
Oct  9 09:43:28 compute-0 systemd[83088]: Stopped Create User's Volatile Files and Directories.
Oct  9 09:43:28 compute-0 systemd[83088]: Removed slice User Application Slice.
Oct  9 09:43:28 compute-0 systemd[83088]: Reached target Shutdown.
Oct  9 09:43:28 compute-0 systemd[83088]: Finished Exit the Session.
Oct  9 09:43:28 compute-0 systemd[83088]: Reached target Exit the Session.
Oct  9 09:43:28 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct  9 09:43:28 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct  9 09:43:28 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  9 09:43:28 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  9 09:43:28 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  9 09:43:28 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  9 09:43:28 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct  9 09:43:28 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v295: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:43:28 compute-0 python3.9[84579]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:43:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:28.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:43:29 compute-0 python3.9[84730]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:43:29 compute-0 python3.9[84907]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  9 09:43:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:30.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:30 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v296: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:43:30 compute-0 python3.9[85059]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:30.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:30 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:30 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:30 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:31 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:31 compute-0 python3.9[85180]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003010.4071124-218-249483837467923/.source follow=False _original_basename=haproxy.j2 checksum=4bca74f6ee0b6450624d22997e2f90c414d58b44 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:31 compute-0 python3.9[85330]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:32 compute-0 python3.9[85452]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003011.5555413-263-115337179535886/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:43:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:32.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:43:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:43:32] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  9 09:43:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:43:32] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  9 09:43:32 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v297: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:43:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:32.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:32 compute-0 python3.9[85605]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:43:33 compute-0 python3.9[85689]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:43:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:34.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:34 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v298: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:43:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:43:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:43:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:34.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:35 compute-0 python3.9[85844]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:43:35 compute-0 python3.9[85997]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:36 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:36.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:36 compute-0 python3.9[86120]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003015.6519449-374-164042127209348/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:36 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v299: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:43:36 compute-0 python3.9[86271]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:36.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:36.976Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:43:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:36.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:43:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:36.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:43:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:36.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:43:37 compute-0 python3.9[86392]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003016.5158868-374-222166213894465/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:38 compute-0 python3.9[86543]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:38.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:38 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v300: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:43:38 compute-0 python3.9[86665]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003017.9545288-506-121988199847195/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:43:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:38.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:43:39 compute-0 python3.9[86815]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:39 compute-0 python3.9[86936]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003018.7293365-506-223218218688967/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:40 compute-0 python3.9[87086]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:43:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:40.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:40 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v301: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:43:40 compute-0 python3.9[87241]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:40.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:40 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 09:43:40 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2796 writes, 13K keys, 2796 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 2796 writes, 2796 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2796 writes, 13K keys, 2796 commit groups, 1.0 writes per commit group, ingest: 24.91 MB, 0.04 MB/s#012Interval WAL: 2796 writes, 2796 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    389.5      0.05              0.04         6    0.009       0      0       0.0       0.0#012  L6      1/0   11.02 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.0    561.3    484.8      0.13              0.09         5    0.026     19K   2270       0.0       0.0#012 Sum      1/0   11.02 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.0    397.3    456.9      0.18              0.13        11    0.017     19K   2270       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.0    400.7    460.6      0.18              0.13        10    0.018     19K   2270       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    561.3    484.8      0.13              0.09         5    0.026     19K   2270       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    400.1      0.05              0.04         5    0.010       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     28.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.021, interval 0.020#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557b3d66b350#2 capacity: 304.00 MB usage: 2.33 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 4.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(175,2.13 MB,0.701091%) FilterBlock(12,66.73 KB,0.0214376%) IndexBlock(12,134.50 KB,0.0432065%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  9 09:43:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:40 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:40 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:40 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:41 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:41 compute-0 python3.9[87394]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:41 compute-0 python3.9[87472]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:41 compute-0 python3.9[87624]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:42 compute-0 python3.9[87703]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:42.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:43:42] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Oct  9 09:43:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:43:42] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Oct  9 09:43:42 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v302: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:43:42 compute-0 python3.9[87857]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:42.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:43 compute-0 python3.9[88009]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:43 compute-0 python3.9[88087]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:44 compute-0 python3.9[88239]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:44.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:44 compute-0 python3.9[88318]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:44 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v303: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:43:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:44.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:45 compute-0 python3.9[88471]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:43:45 compute-0 systemd[1]: Reloading.
Oct  9 09:43:45 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:43:45 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:43:45 compute-0 python3.9[88660]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:45 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:46 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:46 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:46 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:46 compute-0 python3.9[88738]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:46.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:46 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v304: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:43:46 compute-0 python3.9[88891]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:46 compute-0 python3.9[88970]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:46.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:46.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:43:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:46.986Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:43:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:46.986Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:43:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:46.986Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:43:47 compute-0 python3.9[89122]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:43:47 compute-0 systemd[1]: Reloading.
Oct  9 09:43:47 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:43:47 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:43:47 compute-0 systemd[1]: Starting Create netns directory...
Oct  9 09:43:47 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 09:43:47 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 09:43:47 compute-0 systemd[1]: Finished Create netns directory.
Oct  9 09:43:48 compute-0 ovn_controller[83056]: 2025-10-09T09:43:48Z|00025|memory|INFO|16128 kB peak resident set size after 30.1 seconds
Oct  9 09:43:48 compute-0 ovn_controller[83056]: 2025-10-09T09:43:48Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:2
Oct  9 09:43:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:48.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:48 compute-0 podman[89288]: 2025-10-09 09:43:48.319985692 +0000 UTC m=+0.097059506 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Oct  9 09:43:48 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v305: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:43:48 compute-0 python3.9[89333]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:48 compute-0 python3.9[89492]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:48.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:49 compute-0 python3.9[89640]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003028.5690978-959-24580126715766/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:43:49
Oct  9 09:43:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:43:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 09:43:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'vms', 'volumes', 'images', '.nfs', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root']
Oct  9 09:43:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 09:43:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:43:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:43:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:43:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:43:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:43:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:43:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:43:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:43:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:43:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:43:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:43:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:43:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:43:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:43:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:43:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:43:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:43:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:43:50 compute-0 python3.9[89792]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:43:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:50.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:43:50 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v306: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:43:50 compute-0 python3.9[89945]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:50 compute-0 python3.9[90069]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003030.1955197-1034-4838347308971/.source.json _original_basename=.3048rt_i follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:50.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:51 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:50 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:51 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:50 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:51 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:50 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:51 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:51 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:51 compute-0 python3.9[90221]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:43:52] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Oct  9 09:43:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:43:52] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Oct  9 09:43:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:52.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:52 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v307: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:43:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:43:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:52.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:43:53 compute-0 python3.9[90650]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct  9 09:43:53 compute-0 python3.9[90802]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  9 09:43:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:43:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:54.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:43:54 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v308: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:43:54 compute-0 python3.9[90956]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  9 09:43:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:43:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:54.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:43:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:55 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:55 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:55 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:43:56 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:56 compute-0 python3[91127]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  9 09:43:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:56.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:56 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v309: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:43:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:56.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:56.978Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:43:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:56.989Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:43:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:56.989Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:43:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:43:56.990Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
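
All three alertmanager webhook notifications fail identically: the ceph-dashboard receiver is configured with URLs on np0005478302/3/4.shiftstack, and the resolver at 192.168.122.80:53 answers "no such host" for each name, so every retry is doomed until DNS (or the receiver URLs) is corrected. The failure is easy to reproduce outside alertmanager; a resolution probe using the same names:

    import socket

    # Hostnames copied from the alertmanager errors above.
    for host in ("np0005478302.shiftstack",
                 "np0005478303.shiftstack",
                 "np0005478304.shiftstack"):
        try:
            addrs = sorted({ai[4][0] for ai in socket.getaddrinfo(host, 8443)})
            print(host, "->", addrs)
        except socket.gaierror as exc:
            print(host, "->", exc)  # mirrors the "no such host" in the log
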
Oct  9 09:43:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:58.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:58 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v310: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:43:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:43:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:58.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:43:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
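
The pg_autoscaler figures are internally consistent: the raw pg target is used_ratio × bias × (target PGs per OSD × number of OSDs), which is then quantized and never dropped below the pool's floor, which is why the near-empty pools stay at their current 16 or 32. With the default mon_target_pg_per_osd = 100 and the 3 OSDs implied by the 60 GiB cluster (both assumptions on my part), the logged numbers reproduce exactly:

    # Reproduces the raw pg_autoscaler targets logged above (simplified sketch;
    # the real logic lives in the mgr's pg_autoscaler module).
    MON_TARGET_PG_PER_OSD = 100   # assumed default
    NUM_OSDS = 3                  # assumed from the 60 GiB total capacity

    def raw_pg_target(used_ratio: float, bias: float) -> float:
        return used_ratio * bias * MON_TARGET_PG_PER_OSD * NUM_OSDS

    print(raw_pg_target(7.185749983720779e-06, 1.0))  # 0.0021557... ('.mgr')
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # 0.0006104... ('cephfs.cephfs.meta')
    print(raw_pg_target(6.359070782053786e-08, 1.0))  # 1.9077...e-05 ('.nfs')
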
Oct  9 09:44:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:00.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:00 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v311: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:44:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:00.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:01 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:00 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:01 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:00 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:01 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:00 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:01 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:01 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:44:02] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Oct  9 09:44:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:44:02] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Oct  9 09:44:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:44:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:02.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:44:02 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v312: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:44:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:44:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:02.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:44:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:04.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v313: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:44:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:44:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:44:04 compute-0 podman[91140]: 2025-10-09 09:44:04.881556418 +0000 UTC m=+8.726772262 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 09:44:04 compute-0 podman[91246]: 2025-10-09 09:44:04.970745529 +0000 UTC m=+0.028723536 container create 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:44:04 compute-0 podman[91246]: 2025-10-09 09:44:04.957825918 +0000 UTC m=+0.015803945 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 09:44:04 compute-0 python3[91127]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
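
The PODMAN-CONTAINER-DEBUG line shows the rendering rule edpm_container_manage applies to config_data: environment entries become --env KEY=VALUE pairs, each volumes item becomes a --volume flag, net/pid map to --network/--pid, and the pinned image digest goes last. A reduced sketch of that translation (illustrative only, not the module's actual code):

    def to_podman_create(name: str, cfg: dict) -> list[str]:
        """Render a kolla-style config_data mapping into podman create args (sketch)."""
        args = ["podman", "create", "--name", name]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        if cfg.get("pid"):
            args += ["--pid", cfg["pid"]]
        if cfg.get("privileged"):
            args.append("--privileged=True")
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        return args

    cfg = {
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "net": "host",
        "pid": "host",
        "privileged": True,
        "volumes": ["/run/openvswitch:/run/openvswitch:z"],
        "image": "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn"
                 "@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7",
    }
    print(" ".join(to_podman_create("ovn_metadata_agent", cfg)))
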
Oct  9 09:44:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:04.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:05 compute-0 python3.9[91426]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:44:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:05 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:05 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:05 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:06 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:06 compute-0 python3.9[91580]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:44:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:06.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:44:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v314: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:44:06 compute-0 python3.9[91657]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:44:06 compute-0 python3.9[91809]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760003046.4512775-1298-264412108528060/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:06.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:06.978Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:06.995Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:06.995Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:06.995Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:07 compute-0 python3.9[91885]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:44:07 compute-0 systemd[1]: Reloading.
Oct  9 09:44:07 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:44:07 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:44:08 compute-0 python3.9[91995]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:08 compute-0 systemd[1]: Reloading.
Oct  9 09:44:08 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:44:08 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:44:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:08.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:08 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Oct  9 09:44:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v315: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:44:08 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53d7e7a5a9e463dc4bb69bfa74adb914c5f25d4ffeea3b53891f20a3bf8f016c/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct  9 09:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53d7e7a5a9e463dc4bb69bfa74adb914c5f25d4ffeea3b53891f20a3bf8f016c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
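
The two xfs warnings mean the overlay's backing filesystem was created without the XFS bigtime feature, so its inode timestamps are 32-bit and cap out at 0x7fffffff seconds after the epoch; the kernel prints this notice on every such remount. The limit decodes to the familiar Y2038 boundary:

    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00
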
Oct  9 09:44:08 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626.
Oct  9 09:44:08 compute-0 podman[92036]: 2025-10-09 09:44:08.435821482 +0000 UTC m=+0.089630264 container init 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: + sudo -E kolla_set_configs
Oct  9 09:44:08 compute-0 podman[92036]: 2025-10-09 09:44:08.455736784 +0000 UTC m=+0.109545577 container start 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  9 09:44:08 compute-0 edpm-start-podman-container[92036]: ovn_metadata_agent
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: INFO:__main__:Validating config file
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: INFO:__main__:Copying service configuration files
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: INFO:__main__:Writing out command to execute
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: INFO:__main__:Setting permission for /var/lib/neutron
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: ++ cat /run_command
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: + CMD=neutron-ovn-metadata-agent
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: + ARGS=
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: + sudo kolla_copy_cacerts
Oct  9 09:44:08 compute-0 edpm-start-podman-container[92035]: Creating additional drop-in dependency for "ovn_metadata_agent" (87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626)
Oct  9 09:44:08 compute-0 podman[92055]: 2025-10-09 09:44:08.51529685 +0000 UTC m=+0.047795656 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: + [[ ! -n '' ]]
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: + . kolla_extend_start
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: Running command: 'neutron-ovn-metadata-agent'
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: + umask 0022
Oct  9 09:44:08 compute-0 ovn_metadata_agent[92048]: + exec neutron-ovn-metadata-agent
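
The trace above is the standard kolla entrypoint contract: kolla_set_configs reads /var/lib/kolla/config_files/config.json, copies each listed file into place (COPY_ALWAYS strategy), fixes permissions, and writes the service command to /run_command; the start script then cats that file and execs it. A minimal sketch of the consuming side, assuming kolla's documented config.json layout ("command" plus a "config_files" list of source/dest entries):

    import json
    import os
    import shutil

    # Sketch of the startup contract seen in the trace above.
    with open("/var/lib/kolla/config_files/config.json") as f:
        cfg = json.load(f)

    for item in cfg.get("config_files", []):       # e.g. 01-rootwrap.conf above
        shutil.copy(item["source"], item["dest"])  # COPY_ALWAYS: always overwrite

    with open("/run_command", "w") as f:
        f.write(cfg["command"])                    # here: neutron-ovn-metadata-agent

    cmd = open("/run_command").read().strip().split()
    os.execvp(cmd[0], cmd)                         # matches '+ exec neutron-ovn-metadata-agent'
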
Oct  9 09:44:08 compute-0 systemd[1]: Reloading.
Oct  9 09:44:08 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:44:08 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:44:08 compute-0 systemd[1]: Started ovn_metadata_agent container.
Oct  9 09:44:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:08.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:09 compute-0 systemd-logind[798]: Session 36 logged out. Waiting for processes to exit.
Oct  9 09:44:09 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Oct  9 09:44:09 compute-0 systemd[1]: session-36.scope: Consumed 40.490s CPU time.
Oct  9 09:44:09 compute-0 systemd-logind[798]: Removed session 36.
Oct  9 09:44:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-0[9729]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
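
This ceph-crash error recurs because the crash collector runs as an unprivileged user inside its container and cannot traverse /var/lib/ceph/crash on the host bind mount; until the directory's ownership or mode lets that user read it, every scrape attempt will hit EACCES. A direct check of what the scraper sees (same path as the message):

    import os
    import stat

    path = "/var/lib/ceph/crash"   # path from the ceph-crash error above
    st = os.stat(path)
    print("uid/gid:", st.st_uid, st.st_gid,
          "mode:", stat.filemode(st.st_mode),
          "readable:", os.access(path, os.R_OK | os.X_OK))
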
Oct  9 09:44:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.046 92053 INFO neutron.common.config [-] Logging enabled!
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.046 92053 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.046 92053 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.046 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.047 92053 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.047 92053 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.047 92053 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.047 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.047 92053 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.047 92053 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.047 92053 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.048 92053 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.048 92053 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.048 92053 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.048 92053 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.048 92053 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.048 92053 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.049 92053 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.051 92053 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.051 92053 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.051 92053 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.051 92053 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.052 92053 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.052 92053 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.052 92053 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.052 92053 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.052 92053 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.052 92053 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.052 92053 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.052 92053 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.053 92053 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.053 92053 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.053 92053 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.053 92053 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.053 92053 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.053 92053 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.053 92053 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.053 92053 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.053 92053 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.054 92053 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.054 92053 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.054 92053 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.054 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.054 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.054 92053 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.054 92053 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.054 92053 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.054 92053 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.055 92053 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.055 92053 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.055 92053 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.055 92053 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.055 92053 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.055 92053 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.055 92053 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.055 92053 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.055 92053 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.055 92053 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.056 92053 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.056 92053 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.056 92053 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.056 92053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.056 92053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.056 92053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.056 92053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.057 92053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.057 92053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.057 92053 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.057 92053 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.057 92053 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.057 92053 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.057 92053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.057 92053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.057 92053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.058 92053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.058 92053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.058 92053 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.058 92053 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.058 92053 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.058 92053 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.058 92053 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.058 92053 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.058 92053 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.059 92053 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.059 92053 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.059 92053 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.059 92053 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.059 92053 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.059 92053 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.059 92053 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.059 92053 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.059 92053 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.060 92053 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.060 92053 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.060 92053 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.060 92053 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.060 92053 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.060 92053 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.060 92053 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.060 92053 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.060 92053 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.060 92053 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.060 92053 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.061 92053 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.061 92053 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.061 92053 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.061 92053 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.061 92053 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.061 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.061 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.061 92053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.062 92053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.062 92053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.063 92053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.063 92053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.064 92053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.064 92053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.064 92053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.064 92053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.064 92053 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.064 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.064 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.064 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.065 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.065 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.065 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.065 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.065 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.065 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.065 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.065 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.065 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.066 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.066 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.067 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.067 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.067 92053 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.067 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.067 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.067 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.067 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.067 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.068 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.068 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.068 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.068 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.068 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.068 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.068 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.068 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.068 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.069 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.069 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.069 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.069 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.069 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.069 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.069 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.069 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.069 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.069 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.070 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.070 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.070 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.070 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.070 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.070 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.070 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.070 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.071 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.071 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.071 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.071 92053 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.071 92053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.071 92053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.071 92053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.071 92053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.071 92053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.072 92053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.072 92053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.072 92053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.072 92053 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.072 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.072 92053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.072 92053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.072 92053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.072 92053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.073 92053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.073 92053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.073 92053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.073 92053 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.073 92053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.073 92053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.073 92053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.073 92053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.073 92053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.073 92053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.074 92053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.074 92053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.074 92053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.074 92053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.074 92053 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.074 92053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.074 92053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.074 92053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.074 92053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.075 92053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.075 92053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.075 92053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.075 92053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.075 92053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.075 92053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.075 92053 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.075 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.075 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.076 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.076 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.076 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.076 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.076 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.076 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.076 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.076 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.076 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.077 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.077 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.077 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.077 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.077 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.077 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.077 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.077 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.077 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.077 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.078 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.078 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.078 92053 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.078 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.078 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.078 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.078 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.078 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.078 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.079 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.079 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.079 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.079 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.079 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.079 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.079 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.079 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.079 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.080 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.080 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.080 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.080 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.080 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.080 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.080 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.080 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.080 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.081 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.081 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.081 92053 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.081 92053 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.081 92053 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.081 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.081 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.081 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.081 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.081 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.082 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.082 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.082 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.082 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.082 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.082 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.082 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.082 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.082 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.083 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.083 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.083 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.083 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.083 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.083 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.084 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.084 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.084 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.084 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.084 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.084 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.084 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.084 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.084 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.085 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.085 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.085 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.085 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.085 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.085 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.085 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.085 92053 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.085 92053 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
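The banner-delimited dump that ends above is oslo.config's standard startup inventory: ConfigOpts.log_opt_values() (the cfg.py:2589-2613 frames cited on every line) walks each registered option and logs one "option = value" line, masking anything declared secret (transport_url shows as ****). A minimal sketch of the producing side, assuming only that oslo.config is installed; the options registered here are illustrative stand-ins, not the agent's full set:

    import logging

    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    CONF = cfg.CONF

    # Illustrative subset; the real agent registers hundreds of options,
    # including the oslo_messaging_rabbit.* group dumped above.
    CONF.register_opts(
        [
            cfg.IntOpt('rabbit_retry_backoff', default=2),
            cfg.BoolOpt('ssl', default=False),
            cfg.StrOpt('transport_url', secret=True),  # secret -> logged as ****
        ],
        group='oslo_messaging_rabbit',
    )

    if __name__ == '__main__':
        logging.basicConfig(level=logging.DEBUG)
        CONF([], project='neutron')  # a real agent passes --config-file args
        # Emits the asterisk banner, then one "name = value" line per option.
        CONF.log_opt_values(LOG, logging.DEBUG)
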
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.093 92053 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.093 92053 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.093 92053 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.094 92053 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.094 92053 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.105 92053 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name ef217152-08e8-40c8-a663-3565c5b77d4a (UUID: ef217152-08e8-40c8-a663-3565c5b77d4a) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.122 92053 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.122 92053 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.123 92053 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.123 92053 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.125 92053 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.130 92053 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
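The sequence above (schema indexes created, then vlog's connecting/connected pair) is what ovsdbapp prints while it builds an OVSDB IDL connection, first to the local ovsdb-server and then to the OVN Southbound DB. A rough sketch of the same setup against the local server, assuming ovsdbapp and python-ovs are installed; the timeout is illustrative and the real agent adds SSL and retry handling on top:

    from ovs.db import idl as ovs_idl

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.backend.ovs_idl import idlutils
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVS_ENDPOINT = 'tcp:127.0.0.1:6640'  # local ovsdb-server, as in the log

    # One-off RPC to fetch the schema from the server, then register all tables.
    helper = idlutils.get_schema_helper(OVS_ENDPOINT, 'Open_vSwitch')
    helper.register_all()

    # The Connection thread owns the Idl; constructing the high-level API is
    # what creates the Bridge.name / Port.name / Interface.name indexes above.
    conn = connection.Connection(ovs_idl.Idl(OVS_ENDPOINT, helper), timeout=10)
    api = impl_idl.OvsdbIdl(conn)
    print(api.db_list('Open_vSwitch').execute(check_error=True))
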
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.135 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'ef217152-08e8-40c8-a663-3565c5b77d4a'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f406a6797f0>], external_ids={}, name=ef217152-08e8-40c8-a663-3565c5b77d4a, nb_cfg_timestamp=1760003006156, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.136 92053 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f406a67caf0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.136 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.137 92053 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.137 92053 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.137 92053 INFO oslo_service.service [-] Starting 1 workers
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.140 92053 DEBUG oslo_service.service [-] Started child 92297 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.144 92053 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp8z_io02i/privsep.sock']
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.144 92297 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-957113'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.160 92297 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.161 92297 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.161 92297 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.163 92297 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.168 92297 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.172 92297 INFO eventlet.wsgi.server [-] (92297) wsgi starting up on http:/var/lib/neutron/metadata_proxy
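The single slash in "http:/var/lib/neutron/metadata_proxy" above is not a truncated URL: that is how eventlet renders an AF_UNIX listen address, and the worker really is serving the metadata proxy over that unix socket. A small sketch that produces the same startup line, assuming eventlet is installed; the WSGI handler body is a placeholder for the real MetadataProxyHandler:

    import socket

    import eventlet
    from eventlet import wsgi

    def app(environ, start_response):
        # Placeholder: the real handler forwards the request to nova-metadata
        # with the instance/project identification headers filled in.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'stub\n']

    # Listening on a unix socket makes eventlet log
    # "(pid) wsgi starting up on http:/<socket path>", as seen above.
    sock = eventlet.listen('/var/lib/neutron/metadata_proxy',
                           family=socket.AF_UNIX)
    wsgi.server(sock, app)
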
Oct  9 09:44:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:44:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:10.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:44:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v316: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:44:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:44:10 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:44:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:44:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:44:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:44:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:44:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:44:10 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:44:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:44:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:44:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:44:10 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
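The handle_command/audit pairs above show the cephadm mgr module dispatching MonCommands ("config generate-minimal-conf", "auth get", "osd tree") to the monitor, each one audited with the issuing entity. The same JSON-framed commands can be sent from the rados Python binding; a sketch, assuming a readable /etc/ceph/ceph.conf and a client.admin keyring on the host:

    import json

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Same prefix as the audited command above; on success outbuf carries
        # the minimal ceph.conf text.
        cmd = json.dumps({'prefix': 'config generate-minimal-conf'})
        ret, outbuf, outs = cluster.mon_command(cmd, b'')
        if ret == 0:
            print(outbuf.decode())
        else:
            print('mon_command failed:', ret, outs)
    finally:
        cluster.shutdown()
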
Oct  9 09:44:10 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.691 92053 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.691 92053 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp8z_io02i/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.612 92357 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.615 92357 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.617 92357 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.617 92357 INFO oslo.privsep.daemon [-] privsep daemon running as pid 92357
Oct  9 09:44:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:10.693 92357 DEBUG oslo.privsep.daemon [-] privsep: reply[e3fecd34-b1c8-436b-b158-5fdcf4c45a27]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
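Taken together with the "Running privsep helper" and kernel capability-warning lines earlier, the sequence above shows a privsep daemon forked via rootwrap, running as root but restricted to CAP_SYS_ADMIN. A sketch of how such a context is declared on the agent side, assuming oslo.privsep; the context and function names below are illustrative stand-ins for neutron.privileged.namespace_cmd from the helper command line:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # Illustrative context; the agent's real one is
    # 'neutron.privileged.namespace_cmd' as passed to privsep-helper above.
    namespace_cmd = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.namespace_cmd',
        capabilities=[caps.CAP_SYS_ADMIN],
    )

    @namespace_cmd.entrypoint
    def probe():
        # Executes inside the privsep daemon (uid/gid 0/0, eff/prm
        # CAP_SYS_ADMIN, matching the "privsep process running with" lines).
        with open('/proc/self/status') as f:
            return f.read()
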
Oct  9 09:44:10 compute-0 podman[92420]: 2025-10-09 09:44:10.962070121 +0000 UTC m=+0.028885421 container create 7d6c029586d3605ebe592019e2f36ba4a98f7495b252055be93fd6e85cd5e109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_khayyam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  9 09:44:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:10.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:10 compute-0 systemd[1]: Started libpod-conmon-7d6c029586d3605ebe592019e2f36ba4a98f7495b252055be93fd6e85cd5e109.scope.
Oct  9 09:44:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:11 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:44:11 compute-0 podman[92420]: 2025-10-09 09:44:11.013231468 +0000 UTC m=+0.080046788 container init 7d6c029586d3605ebe592019e2f36ba4a98f7495b252055be93fd6e85cd5e109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_khayyam, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:44:11 compute-0 podman[92420]: 2025-10-09 09:44:11.017903736 +0000 UTC m=+0.084719036 container start 7d6c029586d3605ebe592019e2f36ba4a98f7495b252055be93fd6e85cd5e109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  9 09:44:11 compute-0 podman[92420]: 2025-10-09 09:44:11.019094172 +0000 UTC m=+0.085909472 container attach 7d6c029586d3605ebe592019e2f36ba4a98f7495b252055be93fd6e85cd5e109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:44:11 compute-0 cool_khayyam[92433]: 167 167
Oct  9 09:44:11 compute-0 systemd[1]: libpod-7d6c029586d3605ebe592019e2f36ba4a98f7495b252055be93fd6e85cd5e109.scope: Deactivated successfully.
Oct  9 09:44:11 compute-0 conmon[92433]: conmon 7d6c029586d3605ebe59 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7d6c029586d3605ebe592019e2f36ba4a98f7495b252055be93fd6e85cd5e109.scope/container/memory.events
Oct  9 09:44:11 compute-0 podman[92420]: 2025-10-09 09:44:11.022017539 +0000 UTC m=+0.088832869 container died 7d6c029586d3605ebe592019e2f36ba4a98f7495b252055be93fd6e85cd5e109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  9 09:44:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e7b7e39b5638b2aa04874c7a1cb35d22ead43cea7d4a5fbe5af3151212474e8-merged.mount: Deactivated successfully.
Oct  9 09:44:11 compute-0 podman[92420]: 2025-10-09 09:44:11.044024137 +0000 UTC m=+0.110839438 container remove 7d6c029586d3605ebe592019e2f36ba4a98f7495b252055be93fd6e85cd5e109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  9 09:44:11 compute-0 podman[92420]: 2025-10-09 09:44:10.949771363 +0000 UTC m=+0.016586682 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:44:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:44:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:44:11 compute-0 systemd[1]: libpod-conmon-7d6c029586d3605ebe592019e2f36ba4a98f7495b252055be93fd6e85cd5e109.scope: Deactivated successfully.
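The create/init/start/attach/died/remove trail above (container 7d6c0295..., name cool_khayyam) is a short-lived cephadm helper container whose only output was "167 167", which matches the ceph uid/gid inside the image. From the CLI, the same lifecycle is a single `podman run --rm`; a sketch via subprocess, where the stat invocation is an illustrative guess at the uid/gid probe, not the command cephadm is confirmed to have run:

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec')

    # --rm yields the same container create/start/died/remove journal trail;
    # the probe command here is an assumption.
    out = subprocess.run(
        ['podman', 'run', '--rm', IMAGE, 'stat', '-c', '%u %g', '/var/lib/ceph'],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # e.g. "167 167", as logged by cool_khayyam
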
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.109 92357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.110 92357 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.110 92357 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:44:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:11 compute-0 podman[92454]: 2025-10-09 09:44:11.163499471 +0000 UTC m=+0.029573110 container create 81dd186c40e54431ceb455c9434d4348a472b9f76f492547c541b1a2f21b66c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_brown, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:44:11 compute-0 systemd[1]: Started libpod-conmon-81dd186c40e54431ceb455c9434d4348a472b9f76f492547c541b1a2f21b66c6.scope.
Oct  9 09:44:11 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34938223d78dc19b0285b6310fb32f8916fc7f1a7892e849ca775e1047ce86f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34938223d78dc19b0285b6310fb32f8916fc7f1a7892e849ca775e1047ce86f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34938223d78dc19b0285b6310fb32f8916fc7f1a7892e849ca775e1047ce86f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34938223d78dc19b0285b6310fb32f8916fc7f1a7892e849ca775e1047ce86f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34938223d78dc19b0285b6310fb32f8916fc7f1a7892e849ca775e1047ce86f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
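The xfs warnings above are the kernel noting that these filesystems (formatted without the bigtime feature) store timestamps as signed 32-bit seconds, so 0x7fffffff is the classic year-2038 limit. The cutoff is easy to verify:

    from datetime import datetime, timezone

    # 0x7fffffff seconds past the Unix epoch, the limit quoted in the
    # xfs messages above.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
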
Oct  9 09:44:11 compute-0 podman[92454]: 2025-10-09 09:44:11.224522889 +0000 UTC m=+0.090596537 container init 81dd186c40e54431ceb455c9434d4348a472b9f76f492547c541b1a2f21b66c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_brown, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  9 09:44:11 compute-0 podman[92454]: 2025-10-09 09:44:11.2299498 +0000 UTC m=+0.096023438 container start 81dd186c40e54431ceb455c9434d4348a472b9f76f492547c541b1a2f21b66c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_brown, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:44:11 compute-0 podman[92454]: 2025-10-09 09:44:11.231082598 +0000 UTC m=+0.097156236 container attach 81dd186c40e54431ceb455c9434d4348a472b9f76f492547c541b1a2f21b66c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  9 09:44:11 compute-0 podman[92454]: 2025-10-09 09:44:11.150777553 +0000 UTC m=+0.016851222 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:44:11 compute-0 beautiful_brown[92468]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:44:11 compute-0 beautiful_brown[92468]: --> All data devices are unavailable
Oct  9 09:44:11 compute-0 systemd[1]: libpod-81dd186c40e54431ceb455c9434d4348a472b9f76f492547c541b1a2f21b66c6.scope: Deactivated successfully.
Oct  9 09:44:11 compute-0 conmon[92468]: conmon 81dd186c40e54431ceb4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-81dd186c40e54431ceb455c9434d4348a472b9f76f492547c541b1a2f21b66c6.scope/container/memory.events
Oct  9 09:44:11 compute-0 podman[92454]: 2025-10-09 09:44:11.497037936 +0000 UTC m=+0.363111594 container died 81dd186c40e54431ceb455c9434d4348a472b9f76f492547c541b1a2f21b66c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 09:44:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-b34938223d78dc19b0285b6310fb32f8916fc7f1a7892e849ca775e1047ce86f-merged.mount: Deactivated successfully.
Oct  9 09:44:11 compute-0 podman[92454]: 2025-10-09 09:44:11.518333082 +0000 UTC m=+0.384406721 container remove 81dd186c40e54431ceb455c9434d4348a472b9f76f492547c541b1a2f21b66c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 09:44:11 compute-0 systemd[1]: libpod-conmon-81dd186c40e54431ceb455c9434d4348a472b9f76f492547c541b1a2f21b66c6.scope: Deactivated successfully.
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.576 92357 DEBUG oslo.privsep.daemon [-] privsep: reply[c5529868-462a-45ae-9347-e9d1b1a09172]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.578 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=ef217152-08e8-40c8-a663-3565c5b77d4a, column=external_ids, values=({'neutron:ovn-metadata-id': '3fe49051-af5a-52d6-b91d-ba5b9ba1e88e'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.584 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ef217152-08e8-40c8-a663-3565c5b77d4a, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
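The two transactions above write the agent's registration data into Chassis_Private.external_ids in the OVN Southbound DB. In ovsdbapp terms, DbAddCommand and DbSetCommand correspond to the db_add()/db_set() API calls; a sketch against the Southbound schema, with the endpoint and values copied from the log and SSL material assumed to be registered with python-ovs already (ovs.stream.Stream.ssl_set_* calls, not shown):

    from ovs.db import idl as ovs_idl

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.backend.ovs_idl import idlutils
    from ovsdbapp.schema.ovn_southbound import impl_idl

    SB_ENDPOINT = 'ssl:ovsdbserver-sb.openstack.svc:6642'  # from the log
    CHASSIS = 'ef217152-08e8-40c8-a663-3565c5b77d4a'

    helper = idlutils.get_schema_helper(SB_ENDPOINT, 'OVN_Southbound')
    helper.register_all()
    api = impl_idl.OvnSbApiIdlImpl(
        connection.Connection(ovs_idl.Idl(SB_ENDPOINT, helper), timeout=10))

    # DbAddCommand: merge one key into the external_ids map column.
    api.db_add('Chassis_Private', CHASSIS, 'external_ids',
               {'neutron:ovn-metadata-id':
                '3fe49051-af5a-52d6-b91d-ba5b9ba1e88e'}).execute(
                   check_error=True)

    # DbSetCommand with if_exists=True, mirroring the second transaction.
    api.db_set('Chassis_Private', CHASSIS,
               ('external_ids', {'neutron:ovn-bridge': 'br-int'}),
               if_exists=True).execute(check_error=True)
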
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.588 92053 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.588 92053 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.588 92053 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.588 92053 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.588 92053 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.588 92053 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.588 92053 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.589 92053 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.589 92053 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.589 92053 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.589 92053 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.589 92053 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.589 92053 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.589 92053 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.589 92053 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.589 92053 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.590 92053 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.590 92053 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.590 92053 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.590 92053 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.590 92053 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.590 92053 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.590 92053 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.591 92053 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.591 92053 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.591 92053 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.591 92053 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.591 92053 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.591 92053 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.591 92053 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.591 92053 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.592 92053 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.592 92053 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.592 92053 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.592 92053 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.592 92053 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.592 92053 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.593 92053 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.594 92053 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.594 92053 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.594 92053 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.594 92053 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.595 92053 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.595 92053 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.595 92053 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.595 92053 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.595 92053 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.595 92053 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.595 92053 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.595 92053 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.595 92053 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.596 92053 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.596 92053 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.596 92053 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.596 92053 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.596 92053 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.596 92053 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.596 92053 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.596 92053 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.596 92053 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.597 92053 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.597 92053 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.597 92053 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.597 92053 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.597 92053 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.597 92053 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.597 92053 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.597 92053 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.598 92053 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.598 92053 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.598 92053 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.598 92053 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.598 92053 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.598 92053 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.598 92053 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.598 92053 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.598 92053 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.599 92053 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.599 92053 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.599 92053 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.599 92053 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.599 92053 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.599 92053 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.599 92053 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.600 92053 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.600 92053 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.600 92053 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.600 92053 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.600 92053 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.600 92053 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.600 92053 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.601 92053 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.601 92053 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.601 92053 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.601 92053 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.601 92053 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.601 92053 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.601 92053 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.601 92053 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.601 92053 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.601 92053 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.602 92053 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.602 92053 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.602 92053 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.602 92053 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.602 92053 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.602 92053 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.602 92053 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.602 92053 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.602 92053 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.603 92053 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.603 92053 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.603 92053 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.603 92053 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.603 92053 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.603 92053 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.603 92053 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.603 92053 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.603 92053 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.604 92053 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.604 92053 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.604 92053 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.604 92053 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.604 92053 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.604 92053 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.604 92053 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.605 92053 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.605 92053 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.605 92053 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.605 92053 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.605 92053 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.605 92053 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.606 92053 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.606 92053 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.606 92053 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.606 92053 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.606 92053 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.606 92053 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.606 92053 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.606 92053 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.606 92053 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.607 92053 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.607 92053 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.607 92053 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.607 92053 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.607 92053 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.607 92053 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.607 92053 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.607 92053 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.607 92053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.607 92053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.608 92053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.608 92053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.608 92053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.608 92053 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.608 92053 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.608 92053 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.608 92053 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.608 92053 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.608 92053 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.608 92053 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.609 92053 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.609 92053 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.609 92053 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.609 92053 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.609 92053 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.609 92053 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.609 92053 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.609 92053 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.609 92053 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.609 92053 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.609 92053 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.610 92053 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.610 92053 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.610 92053 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.610 92053 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.610 92053 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.610 92053 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.610 92053 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.610 92053 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.610 92053 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.610 92053 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.611 92053 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.611 92053 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.611 92053 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.611 92053 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.611 92053 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.611 92053 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.611 92053 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.611 92053 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.611 92053 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.612 92053 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.612 92053 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.612 92053 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.612 92053 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.612 92053 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.612 92053 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.612 92053 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.612 92053 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.612 92053 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.613 92053 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.613 92053 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.613 92053 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.613 92053 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.613 92053 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.613 92053 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.613 92053 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.613 92053 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.613 92053 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.613 92053 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.613 92053 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.614 92053 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.614 92053 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.614 92053 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.614 92053 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.614 92053 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.614 92053 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.614 92053 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.614 92053 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.614 92053 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.614 92053 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.615 92053 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.615 92053 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.615 92053 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.615 92053 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.615 92053 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.615 92053 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.615 92053 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.615 92053 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.615 92053 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.615 92053 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.616 92053 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.616 92053 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.616 92053 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.616 92053 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.616 92053 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.616 92053 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.616 92053 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.616 92053 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.617 92053 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.617 92053 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.617 92053 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.617 92053 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.617 92053 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.617 92053 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.617 92053 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.617 92053 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.617 92053 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.618 92053 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.618 92053 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.618 92053 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.618 92053 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.618 92053 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.618 92053 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.618 92053 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.618 92053 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.618 92053 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.618 92053 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.619 92053 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.619 92053 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.619 92053 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.619 92053 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.619 92053 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.619 92053 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.619 92053 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.619 92053 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.619 92053 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.620 92053 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.620 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.620 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.620 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.620 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.620 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.620 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.620 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.620 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.620 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.621 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.621 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.621 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.621 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.621 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.621 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.621 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.621 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.621 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.621 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.622 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.622 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.622 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.622 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.622 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.622 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.622 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.622 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.622 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.622 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.623 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.623 92053 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.623 92053 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.623 92053 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.623 92053 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.623 92053 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:44:11.623 92053 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
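[Editor's note] The banner of asterisks above closes the option dump that oslo.config emits when a service starts with debug logging enabled: every registered option is written as a "name = value" line, and options registered as secret (such as transport_url) are masked as ****. A minimal sketch of how such a dump is produced, using hypothetical stand-in options rather than neutron's real registration modules:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)
    CONF = cfg.CONF

    # Stand-in options for illustration only; the agent's real options
    # (state_path, AGENT.*, ovn.*, ...) are registered by neutron's
    # config modules at import time.
    CONF.register_opts([
        cfg.IntOpt('rpc_response_max_timeout', default=600),
        cfg.StrOpt('transport_url',
                   default='rabbit://guest:guest@localhost/',
                   secret=True),  # secret=True is why the log shows "****"
    ])
    CONF([], project='demo')

    # Writes an asterisk banner, one "name = value" line per registered
    # option at the requested level, then a closing banner -- roughly the
    # shape of the lines recorded above.
    CONF.log_opt_values(LOG, logging.DEBUG)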
Oct  9 09:44:11 compute-0 podman[92574]: 2025-10-09 09:44:11.931736091 +0000 UTC m=+0.028172356 container create 7c7855c4cc9e45f575a93a1f614a8f87932319608beb1b808420cb300f18a65f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:44:11 compute-0 systemd[1]: Started libpod-conmon-7c7855c4cc9e45f575a93a1f614a8f87932319608beb1b808420cb300f18a65f.scope.
Oct  9 09:44:11 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:44:11 compute-0 podman[92574]: 2025-10-09 09:44:11.976867339 +0000 UTC m=+0.073303614 container init 7c7855c4cc9e45f575a93a1f614a8f87932319608beb1b808420cb300f18a65f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  9 09:44:11 compute-0 podman[92574]: 2025-10-09 09:44:11.9810171 +0000 UTC m=+0.077453355 container start 7c7855c4cc9e45f575a93a1f614a8f87932319608beb1b808420cb300f18a65f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dijkstra, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct  9 09:44:11 compute-0 podman[92574]: 2025-10-09 09:44:11.982065769 +0000 UTC m=+0.078502044 container attach 7c7855c4cc9e45f575a93a1f614a8f87932319608beb1b808420cb300f18a65f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:44:11 compute-0 beautiful_dijkstra[92587]: 167 167
Oct  9 09:44:11 compute-0 systemd[1]: libpod-7c7855c4cc9e45f575a93a1f614a8f87932319608beb1b808420cb300f18a65f.scope: Deactivated successfully.
Oct  9 09:44:11 compute-0 conmon[92587]: conmon 7c7855c4cc9e45f575a9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c7855c4cc9e45f575a93a1f614a8f87932319608beb1b808420cb300f18a65f.scope/container/memory.events
Oct  9 09:44:11 compute-0 podman[92574]: 2025-10-09 09:44:11.984767397 +0000 UTC m=+0.081203652 container died 7c7855c4cc9e45f575a93a1f614a8f87932319608beb1b808420cb300f18a65f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dijkstra, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 09:44:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eb3f5ed31b7afe5a83143801bb0aa3ff7504fdf228442a41a75b0fedd4cac4e-merged.mount: Deactivated successfully.
Oct  9 09:44:12 compute-0 podman[92574]: 2025-10-09 09:44:12.004574937 +0000 UTC m=+0.101011191 container remove 7c7855c4cc9e45f575a93a1f614a8f87932319608beb1b808420cb300f18a65f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dijkstra, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:44:12 compute-0 podman[92574]: 2025-10-09 09:44:11.920486361 +0000 UTC m=+0.016922637 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:44:12 compute-0 systemd[1]: libpod-conmon-7c7855c4cc9e45f575a93a1f614a8f87932319608beb1b808420cb300f18a65f.scope: Deactivated successfully.
Oct  9 09:44:12 compute-0 podman[92610]: 2025-10-09 09:44:12.124729061 +0000 UTC m=+0.027227172 container create ebd6df6d13f6a5ecf831aac5def9643dd328064c411d369fe42c67c91b18b393 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_davinci, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:44:12 compute-0 systemd[1]: Started libpod-conmon-ebd6df6d13f6a5ecf831aac5def9643dd328064c411d369fe42c67c91b18b393.scope.
Oct  9 09:44:12 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca69b11463ca58153d756938ed512384202637031d62c5a72b92c4bab02acf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca69b11463ca58153d756938ed512384202637031d62c5a72b92c4bab02acf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca69b11463ca58153d756938ed512384202637031d62c5a72b92c4bab02acf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca69b11463ca58153d756938ed512384202637031d62c5a72b92c4bab02acf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:44:12 compute-0 podman[92610]: 2025-10-09 09:44:12.181161696 +0000 UTC m=+0.083659806 container init ebd6df6d13f6a5ecf831aac5def9643dd328064c411d369fe42c67c91b18b393 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_davinci, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:44:12 compute-0 podman[92610]: 2025-10-09 09:44:12.189119533 +0000 UTC m=+0.091617644 container start ebd6df6d13f6a5ecf831aac5def9643dd328064c411d369fe42c67c91b18b393 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 09:44:12 compute-0 podman[92610]: 2025-10-09 09:44:12.190258342 +0000 UTC m=+0.092756452 container attach ebd6df6d13f6a5ecf831aac5def9643dd328064c411d369fe42c67c91b18b393 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:44:12 compute-0 podman[92610]: 2025-10-09 09:44:12.114117416 +0000 UTC m=+0.016615547 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:44:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:44:12] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct  9 09:44:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:44:12] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct  9 09:44:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:12.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:12 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v317: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]: {
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:    "1": [
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:        {
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:            "devices": [
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:                "/dev/loop3"
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:            ],
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:            "lv_name": "ceph_lv0",
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:            "lv_size": "21470642176",
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:            "name": "ceph_lv0",
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:            "tags": {
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:                "ceph.cluster_name": "ceph",
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:                "ceph.crush_device_class": "",
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:                "ceph.encrypted": "0",
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:                "ceph.osd_id": "1",
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:                "ceph.type": "block",
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:                "ceph.vdo": "0",
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:                "ceph.with_tpm": "0"
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:            },
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:            "type": "block",
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:            "vg_name": "ceph_vg0"
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:        }
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]:    ]
Oct  9 09:44:12 compute-0 vigilant_davinci[92623]: }
Oct  9 09:44:12 compute-0 systemd[1]: libpod-ebd6df6d13f6a5ecf831aac5def9643dd328064c411d369fe42c67c91b18b393.scope: Deactivated successfully.
Oct  9 09:44:12 compute-0 podman[92610]: 2025-10-09 09:44:12.430556792 +0000 UTC m=+0.333054913 container died ebd6df6d13f6a5ecf831aac5def9643dd328064c411d369fe42c67c91b18b393 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:44:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ca69b11463ca58153d756938ed512384202637031d62c5a72b92c4bab02acf1-merged.mount: Deactivated successfully.
Oct  9 09:44:12 compute-0 podman[92610]: 2025-10-09 09:44:12.455229733 +0000 UTC m=+0.357727844 container remove ebd6df6d13f6a5ecf831aac5def9643dd328064c411d369fe42c67c91b18b393 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_davinci, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct  9 09:44:12 compute-0 systemd[1]: libpod-conmon-ebd6df6d13f6a5ecf831aac5def9643dd328064c411d369fe42c67c91b18b393.scope: Deactivated successfully.
Oct  9 09:44:12 compute-0 podman[92722]: 2025-10-09 09:44:12.864318239 +0000 UTC m=+0.029351110 container create c7b946c9bee7dec72aea426c454cd2371835dc6f165dd91de8f0ac826e4c7e04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 09:44:12 compute-0 systemd[1]: Started libpod-conmon-c7b946c9bee7dec72aea426c454cd2371835dc6f165dd91de8f0ac826e4c7e04.scope.
Oct  9 09:44:12 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:44:12 compute-0 podman[92722]: 2025-10-09 09:44:12.911367087 +0000 UTC m=+0.076399948 container init c7b946c9bee7dec72aea426c454cd2371835dc6f165dd91de8f0ac826e4c7e04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_meitner, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 09:44:12 compute-0 podman[92722]: 2025-10-09 09:44:12.915399556 +0000 UTC m=+0.080432417 container start c7b946c9bee7dec72aea426c454cd2371835dc6f165dd91de8f0ac826e4c7e04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  9 09:44:12 compute-0 podman[92722]: 2025-10-09 09:44:12.916998795 +0000 UTC m=+0.082031656 container attach c7b946c9bee7dec72aea426c454cd2371835dc6f165dd91de8f0ac826e4c7e04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_meitner, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  9 09:44:12 compute-0 fervent_meitner[92734]: 167 167
Oct  9 09:44:12 compute-0 podman[92722]: 2025-10-09 09:44:12.918547046 +0000 UTC m=+0.083579907 container died c7b946c9bee7dec72aea426c454cd2371835dc6f165dd91de8f0ac826e4c7e04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_meitner, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 09:44:12 compute-0 systemd[1]: libpod-c7b946c9bee7dec72aea426c454cd2371835dc6f165dd91de8f0ac826e4c7e04.scope: Deactivated successfully.
Oct  9 09:44:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ee930cd3f297a75dc238abedc4c7f6becfdac4d3cf8dcc75b9bd39b936ac0c6-merged.mount: Deactivated successfully.
Oct  9 09:44:12 compute-0 podman[92722]: 2025-10-09 09:44:12.938719053 +0000 UTC m=+0.103751914 container remove c7b946c9bee7dec72aea426c454cd2371835dc6f165dd91de8f0ac826e4c7e04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  9 09:44:12 compute-0 podman[92722]: 2025-10-09 09:44:12.85268653 +0000 UTC m=+0.017719391 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:44:12 compute-0 systemd[1]: libpod-conmon-c7b946c9bee7dec72aea426c454cd2371835dc6f165dd91de8f0ac826e4c7e04.scope: Deactivated successfully.
Oct  9 09:44:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:12.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:13 compute-0 podman[92756]: 2025-10-09 09:44:13.059535557 +0000 UTC m=+0.029478440 container create 9e71addf13aafc40abfc6714315f481436d78d036daa69d86fc12159cdf32eb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:44:13 compute-0 systemd[1]: Started libpod-conmon-9e71addf13aafc40abfc6714315f481436d78d036daa69d86fc12159cdf32eb7.scope.
Oct  9 09:44:13 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da017df5beba2ec99f1b935608c660e012560c44fef646d209c1cd4e3aefdf65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da017df5beba2ec99f1b935608c660e012560c44fef646d209c1cd4e3aefdf65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da017df5beba2ec99f1b935608c660e012560c44fef646d209c1cd4e3aefdf65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da017df5beba2ec99f1b935608c660e012560c44fef646d209c1cd4e3aefdf65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:44:13 compute-0 podman[92756]: 2025-10-09 09:44:13.114817651 +0000 UTC m=+0.084760533 container init 9e71addf13aafc40abfc6714315f481436d78d036daa69d86fc12159cdf32eb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_elbakyan, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:44:13 compute-0 podman[92756]: 2025-10-09 09:44:13.120409062 +0000 UTC m=+0.090351945 container start 9e71addf13aafc40abfc6714315f481436d78d036daa69d86fc12159cdf32eb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  9 09:44:13 compute-0 podman[92756]: 2025-10-09 09:44:13.121624406 +0000 UTC m=+0.091567289 container attach 9e71addf13aafc40abfc6714315f481436d78d036daa69d86fc12159cdf32eb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_elbakyan, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:44:13 compute-0 podman[92756]: 2025-10-09 09:44:13.047782477 +0000 UTC m=+0.017725371 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:44:13 compute-0 relaxed_elbakyan[92769]: {}
Oct  9 09:44:13 compute-0 lvm[92846]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:44:13 compute-0 lvm[92846]: VG ceph_vg0 finished
Oct  9 09:44:13 compute-0 systemd[1]: libpod-9e71addf13aafc40abfc6714315f481436d78d036daa69d86fc12159cdf32eb7.scope: Deactivated successfully.
Oct  9 09:44:13 compute-0 podman[92756]: 2025-10-09 09:44:13.61152243 +0000 UTC m=+0.581465312 container died 9e71addf13aafc40abfc6714315f481436d78d036daa69d86fc12159cdf32eb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_elbakyan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  9 09:44:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-da017df5beba2ec99f1b935608c660e012560c44fef646d209c1cd4e3aefdf65-merged.mount: Deactivated successfully.
Oct  9 09:44:13 compute-0 podman[92756]: 2025-10-09 09:44:13.638551005 +0000 UTC m=+0.608493889 container remove 9e71addf13aafc40abfc6714315f481436d78d036daa69d86fc12159cdf32eb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_elbakyan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  9 09:44:13 compute-0 systemd[1]: libpod-conmon-9e71addf13aafc40abfc6714315f481436d78d036daa69d86fc12159cdf32eb7.scope: Deactivated successfully.
Oct  9 09:44:13 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:44:13 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:13 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:44:13 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:14 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:14 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:14.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:14 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v318: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:44:14 compute-0 systemd-logind[798]: New session 37 of user zuul.
Oct  9 09:44:14 compute-0 systemd[1]: Started Session 37 of User zuul.
Oct  9 09:44:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:14.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:14 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:15 compute-0 python3.9[93037]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:44:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:16 compute-0 python3.9[93194]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:44:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:16.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:44:16 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v319: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 09:44:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:16.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:16.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:16.989Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:16.990Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:16.990Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:17 compute-0 python3.9[93356]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:44:17 compute-0 systemd[1]: Reloading.
Oct  9 09:44:17 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:44:17 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:44:18 compute-0 python3.9[93542]: ansible-ansible.builtin.service_facts Invoked
Oct  9 09:44:18 compute-0 network[93559]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 09:44:18 compute-0 network[93560]: 'network-scripts' will be removed from distribution in near future.
Oct  9 09:44:18 compute-0 network[93561]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 09:44:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:18.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:18 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v320: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:44:18 compute-0 podman[93568]: 2025-10-09 09:44:18.838804307 +0000 UTC m=+0.092501262 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  9 09:44:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:18.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:44:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:44:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:44:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:44:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:44:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:44:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:44:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:44:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:20 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:20 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:20 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:20.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:20 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v321: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:44:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:20.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:21 compute-0 python3.9[93852]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:44:22] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Oct  9 09:44:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:44:22] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Oct  9 09:44:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:22.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:22 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v322: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1 op/s
Oct  9 09:44:22 compute-0 python3.9[94006]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:22.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:23 compute-0 python3.9[94160]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:23 compute-0 python3.9[94313]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:24 compute-0 python3.9[94466]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:24.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:24 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v323: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:44:24 compute-0 python3.9[94620]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:24.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:24 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:24 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:24 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:25 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:25 compute-0 python3.9[94774]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:26 compute-0 python3.9[94928]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:26.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:26 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v324: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1 op/s
Oct  9 09:44:26 compute-0 python3.9[95081]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:26.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:26.988Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:26.988Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:26.988Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:26.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:27 compute-0 python3.9[95233]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:27 compute-0 python3.9[95385]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:27 compute-0 python3.9[95537]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:28.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:28 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v325: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:44:28 compute-0 python3.9[95690]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:28 compute-0 python3.9[95843]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:28.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:29 compute-0 python3.9[96019]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:29 compute-0 python3.9[96172]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:30 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:44:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:30.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:44:30 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v326: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:44:30 compute-0 python3.9[96325]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:30 compute-0 python3.9[96478]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:30.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:31 compute-0 python3.9[96630]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:31 compute-0 python3.9[96782]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:44:32] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Oct  9 09:44:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:44:32] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Oct  9 09:44:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:32.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:32 compute-0 python3.9[96935]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:32 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v327: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 09:44:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:32.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:33 compute-0 python3.9[97088]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:33 compute-0 python3.9[97240]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  9 09:44:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:44:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:34.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:44:34 compute-0 python3.9[97393]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:44:34 compute-0 systemd[1]: Reloading.
Oct  9 09:44:34 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v328: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:44:34 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:44:34 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:44:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:44:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:44:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:44:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:34.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:44:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:34 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:35 compute-0 python3.9[97581]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:35 compute-0 python3.9[97734]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:35 compute-0 python3.9[97887]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:36.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:36 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v329: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:44:36 compute-0 python3.9[98041]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:36 compute-0 python3.9[98195]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:36.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:36.989Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:36.989Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:36.990Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:44:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:37.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:44:37 compute-0 python3.9[98348]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:37 compute-0 python3.9[98501]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:38.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:38 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v330: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:44:38 compute-0 podman[98581]: 2025-10-09 09:44:38.606499597 +0000 UTC m=+0.045008336 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  9 09:44:38 compute-0 python3.9[98672]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct  9 09:44:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:39.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:39 compute-0 python3.9[98825]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  9 09:44:39 compute-0 rsyslogd[1243]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:44:39 compute-0 rsyslogd[1243]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:44:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:40 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:40 compute-0 python3.9[98985]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  9 09:44:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:40.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:40 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v331: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:44:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:41.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:41 compute-0 python3.9[99146]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:44:41 compute-0 python3.9[99230]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:44:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:44:42] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct  9 09:44:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:44:42] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct  9 09:44:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:42.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:42 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v332: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:44:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:43.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:44:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:44.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:44:44 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v333: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:44:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:44 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:44 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:44 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:44 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:45.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:46.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:46 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v334: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:44:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:46.983Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:46.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:46.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:46.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:47.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:48.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:48 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v335: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:44:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:44:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:49.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:44:49 compute-0 podman[99337]: 2025-10-09 09:44:49.41392921 +0000 UTC m=+0.085664800 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller)
Oct  9 09:44:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:44:49
Oct  9 09:44:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:44:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 09:44:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['default.rgw.log', 'backups', '.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'images', '.nfs', 'default.rgw.control', '.rgw.root']
Oct  9 09:44:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 09:44:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:44:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:44:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:44:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:44:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:44:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:44:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:44:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:44:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:44:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:44:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:44:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:44:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:44:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:44:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:44:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:44:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:44:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:44:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:50.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:50 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v336: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:44:50 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 09:44:50 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 6850 writes, 28K keys, 6850 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6850 writes, 1264 syncs, 5.42 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6850 writes, 28K keys, 6850 commit groups, 1.0 writes per commit group, ingest: 20.01 MB, 0.03 MB/s#012Interval WAL: 6850 writes, 1264 syncs, 5.42 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563ba5b2f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563ba5b2f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct  9 09:44:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:44:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:51.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:44:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:44:52] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct  9 09:44:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:44:52] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct  9 09:44:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:52.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:52 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v337: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:44:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:44:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:53.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:44:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:44:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:54.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:44:54 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v338: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:44:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:55.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:56.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:56 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v339: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:44:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:56.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:56.995Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:56.996Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:44:56.996Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:44:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:57.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:44:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:58.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:58 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v340: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:44:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:44:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:44:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:59.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:44:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  9 09:45:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:00.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:00 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v341: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:45:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:01.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:45:02] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct  9 09:45:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:45:02] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Oct  9 09:45:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:02.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:02 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v342: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:45:02 compute-0 kernel: SELinux:  Converting 483 SID table entries...
Oct  9 09:45:02 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 09:45:02 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct  9 09:45:02 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 09:45:02 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct  9 09:45:02 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 09:45:02 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 09:45:02 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 09:45:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:03 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:03 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:03 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:45:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:03.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:45:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:04.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v343: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:45:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:45:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:45:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:05.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:06.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v344: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:45:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:06.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:06.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:06.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:06.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:45:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:07.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:45:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:08 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:45:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:08.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:45:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v345: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:45:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:09.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:09 compute-0 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=3 res=1
Oct  9 09:45:09 compute-0 podman[99531]: 2025-10-09 09:45:09.435633507 +0000 UTC m=+0.040154880 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent)
Oct  9 09:45:09 compute-0 kernel: SELinux:  Converting 483 SID table entries...
Oct  9 09:45:09 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 09:45:09 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct  9 09:45:09 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 09:45:09 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct  9 09:45:09 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 09:45:09 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 09:45:09 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 09:45:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:45:10.095 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:45:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:45:10.096 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:45:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:45:10.096 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:45:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:10.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v346: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:45:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:11.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:45:12] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  9 09:45:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:45:12] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  9 09:45:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:12.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:12 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v347: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:45:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:13 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:13.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:13 compute-0 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Oct  9 09:45:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:45:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:45:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:45:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:45:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:14.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:14 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v348: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:45:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:45:14 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:45:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:45:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:45:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:45:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:45:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:45:14 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:45:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:45:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:45:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:45:14 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:45:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:45:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:15.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:45:15 compute-0 podman[99725]: 2025-10-09 09:45:15.124594752 +0000 UTC m=+0.029616167 container create 10bb86711c3075a9fd5310f2435e540892df5542f0026c0dfb88e0edb498514d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:45:15 compute-0 systemd[1]: Started libpod-conmon-10bb86711c3075a9fd5310f2435e540892df5542f0026c0dfb88e0edb498514d.scope.
Oct  9 09:45:15 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:45:15 compute-0 podman[99725]: 2025-10-09 09:45:15.189989775 +0000 UTC m=+0.095011200 container init 10bb86711c3075a9fd5310f2435e540892df5542f0026c0dfb88e0edb498514d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_banzai, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:45:15 compute-0 podman[99725]: 2025-10-09 09:45:15.195148832 +0000 UTC m=+0.100170248 container start 10bb86711c3075a9fd5310f2435e540892df5542f0026c0dfb88e0edb498514d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_banzai, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct  9 09:45:15 compute-0 podman[99725]: 2025-10-09 09:45:15.196193162 +0000 UTC m=+0.101214577 container attach 10bb86711c3075a9fd5310f2435e540892df5542f0026c0dfb88e0edb498514d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:45:15 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:15 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:15 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:15 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:15 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:45:15 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:15 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:15 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:45:15 compute-0 silly_banzai[99738]: 167 167
Oct  9 09:45:15 compute-0 systemd[1]: libpod-10bb86711c3075a9fd5310f2435e540892df5542f0026c0dfb88e0edb498514d.scope: Deactivated successfully.
Oct  9 09:45:15 compute-0 conmon[99738]: conmon 10bb86711c3075a9fd53 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-10bb86711c3075a9fd5310f2435e540892df5542f0026c0dfb88e0edb498514d.scope/container/memory.events
Oct  9 09:45:15 compute-0 podman[99725]: 2025-10-09 09:45:15.200062198 +0000 UTC m=+0.105083623 container died 10bb86711c3075a9fd5310f2435e540892df5542f0026c0dfb88e0edb498514d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:45:15 compute-0 podman[99725]: 2025-10-09 09:45:15.111926353 +0000 UTC m=+0.016947789 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:45:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-01c2ee4d0dea300acaa60c6a3b2c558f9fefa5f26508ccef8c86ec7c2c06c4f7-merged.mount: Deactivated successfully.
Oct  9 09:45:15 compute-0 podman[99725]: 2025-10-09 09:45:15.224128268 +0000 UTC m=+0.129149684 container remove 10bb86711c3075a9fd5310f2435e540892df5542f0026c0dfb88e0edb498514d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  9 09:45:15 compute-0 systemd[1]: libpod-conmon-10bb86711c3075a9fd5310f2435e540892df5542f0026c0dfb88e0edb498514d.scope: Deactivated successfully.
Oct  9 09:45:15 compute-0 podman[99760]: 2025-10-09 09:45:15.345173297 +0000 UTC m=+0.029385542 container create 828747678bfeffc0073026eaa951c518a091c56f630137b78559cea247821c01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:45:15 compute-0 systemd[1]: Started libpod-conmon-828747678bfeffc0073026eaa951c518a091c56f630137b78559cea247821c01.scope.
Oct  9 09:45:15 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf79c3c2288bbbb0ba1015680991f306ba2c04d7cce8ef5869e765d4558d5d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf79c3c2288bbbb0ba1015680991f306ba2c04d7cce8ef5869e765d4558d5d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf79c3c2288bbbb0ba1015680991f306ba2c04d7cce8ef5869e765d4558d5d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf79c3c2288bbbb0ba1015680991f306ba2c04d7cce8ef5869e765d4558d5d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf79c3c2288bbbb0ba1015680991f306ba2c04d7cce8ef5869e765d4558d5d4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:45:15 compute-0 podman[99760]: 2025-10-09 09:45:15.427909189 +0000 UTC m=+0.112121433 container init 828747678bfeffc0073026eaa951c518a091c56f630137b78559cea247821c01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:45:15 compute-0 podman[99760]: 2025-10-09 09:45:15.333260554 +0000 UTC m=+0.017472819 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:45:15 compute-0 podman[99760]: 2025-10-09 09:45:15.433082543 +0000 UTC m=+0.117294788 container start 828747678bfeffc0073026eaa951c518a091c56f630137b78559cea247821c01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_neumann, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  9 09:45:15 compute-0 podman[99760]: 2025-10-09 09:45:15.438122558 +0000 UTC m=+0.122334802 container attach 828747678bfeffc0073026eaa951c518a091c56f630137b78559cea247821c01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_neumann, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:45:15 compute-0 musing_neumann[99773]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:45:15 compute-0 musing_neumann[99773]: --> All data devices are unavailable
Oct  9 09:45:15 compute-0 systemd[1]: libpod-828747678bfeffc0073026eaa951c518a091c56f630137b78559cea247821c01.scope: Deactivated successfully.
Oct  9 09:45:15 compute-0 podman[99760]: 2025-10-09 09:45:15.695023375 +0000 UTC m=+0.379235620 container died 828747678bfeffc0073026eaa951c518a091c56f630137b78559cea247821c01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_neumann, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  9 09:45:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcf79c3c2288bbbb0ba1015680991f306ba2c04d7cce8ef5869e765d4558d5d4-merged.mount: Deactivated successfully.
Oct  9 09:45:15 compute-0 podman[99760]: 2025-10-09 09:45:15.720580026 +0000 UTC m=+0.404792270 container remove 828747678bfeffc0073026eaa951c518a091c56f630137b78559cea247821c01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_neumann, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  9 09:45:15 compute-0 systemd[1]: libpod-conmon-828747678bfeffc0073026eaa951c518a091c56f630137b78559cea247821c01.scope: Deactivated successfully.
Oct  9 09:45:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:16 compute-0 podman[99881]: 2025-10-09 09:45:16.148216463 +0000 UTC m=+0.029572004 container create a196f819d2ee3347d683272a332e75fc67dcf47f14d4d25322e126ebed88f34f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hamilton, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  9 09:45:16 compute-0 systemd[1]: Started libpod-conmon-a196f819d2ee3347d683272a332e75fc67dcf47f14d4d25322e126ebed88f34f.scope.
Oct  9 09:45:16 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:45:16 compute-0 podman[99881]: 2025-10-09 09:45:16.198485913 +0000 UTC m=+0.079841464 container init a196f819d2ee3347d683272a332e75fc67dcf47f14d4d25322e126ebed88f34f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hamilton, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:45:16 compute-0 podman[99881]: 2025-10-09 09:45:16.20468883 +0000 UTC m=+0.086044371 container start a196f819d2ee3347d683272a332e75fc67dcf47f14d4d25322e126ebed88f34f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hamilton, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  9 09:45:16 compute-0 podman[99881]: 2025-10-09 09:45:16.205992038 +0000 UTC m=+0.087347589 container attach a196f819d2ee3347d683272a332e75fc67dcf47f14d4d25322e126ebed88f34f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hamilton, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:45:16 compute-0 wizardly_hamilton[99894]: 167 167
Oct  9 09:45:16 compute-0 systemd[1]: libpod-a196f819d2ee3347d683272a332e75fc67dcf47f14d4d25322e126ebed88f34f.scope: Deactivated successfully.
Oct  9 09:45:16 compute-0 conmon[99894]: conmon a196f819d2ee3347d683 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a196f819d2ee3347d683272a332e75fc67dcf47f14d4d25322e126ebed88f34f.scope/container/memory.events
Oct  9 09:45:16 compute-0 podman[99881]: 2025-10-09 09:45:16.209737239 +0000 UTC m=+0.091092770 container died a196f819d2ee3347d683272a332e75fc67dcf47f14d4d25322e126ebed88f34f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hamilton, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:45:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-7726fac83be95ecb4951740cc3de655dfddc9b5dd9b77e77fc2d53b8dbabdd51-merged.mount: Deactivated successfully.
Oct  9 09:45:16 compute-0 podman[99881]: 2025-10-09 09:45:16.229663984 +0000 UTC m=+0.111019525 container remove a196f819d2ee3347d683272a332e75fc67dcf47f14d4d25322e126ebed88f34f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hamilton, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  9 09:45:16 compute-0 podman[99881]: 2025-10-09 09:45:16.135542033 +0000 UTC m=+0.016897594 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:45:16 compute-0 systemd[1]: libpod-conmon-a196f819d2ee3347d683272a332e75fc67dcf47f14d4d25322e126ebed88f34f.scope: Deactivated successfully.
Oct  9 09:45:16 compute-0 podman[99916]: 2025-10-09 09:45:16.352237446 +0000 UTC m=+0.030351162 container create 27e7ee775c93c3baa7911f18e593e61ee192e3418435d4ea49721fa006818c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_bhabha, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  9 09:45:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:16.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:16 compute-0 systemd[1]: Started libpod-conmon-27e7ee775c93c3baa7911f18e593e61ee192e3418435d4ea49721fa006818c87.scope.
Oct  9 09:45:16 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:45:16 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v349: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e12ecbb9a2ca49de45cca53c85ac1c2f839bde2667f78a9b2e4e09a11d14965/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e12ecbb9a2ca49de45cca53c85ac1c2f839bde2667f78a9b2e4e09a11d14965/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e12ecbb9a2ca49de45cca53c85ac1c2f839bde2667f78a9b2e4e09a11d14965/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e12ecbb9a2ca49de45cca53c85ac1c2f839bde2667f78a9b2e4e09a11d14965/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:45:16 compute-0 podman[99916]: 2025-10-09 09:45:16.419851633 +0000 UTC m=+0.097965370 container init 27e7ee775c93c3baa7911f18e593e61ee192e3418435d4ea49721fa006818c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 09:45:16 compute-0 podman[99916]: 2025-10-09 09:45:16.425319724 +0000 UTC m=+0.103433431 container start 27e7ee775c93c3baa7911f18e593e61ee192e3418435d4ea49721fa006818c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_bhabha, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:45:16 compute-0 podman[99916]: 2025-10-09 09:45:16.429135811 +0000 UTC m=+0.107249538 container attach 27e7ee775c93c3baa7911f18e593e61ee192e3418435d4ea49721fa006818c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_bhabha, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:45:16 compute-0 podman[99916]: 2025-10-09 09:45:16.33945917 +0000 UTC m=+0.017572907 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]: {
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:    "1": [
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:        {
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:            "devices": [
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:                "/dev/loop3"
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:            ],
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:            "lv_name": "ceph_lv0",
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:            "lv_size": "21470642176",
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:            "name": "ceph_lv0",
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:            "tags": {
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:                "ceph.cluster_name": "ceph",
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:                "ceph.crush_device_class": "",
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:                "ceph.encrypted": "0",
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:                "ceph.osd_id": "1",
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:                "ceph.type": "block",
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:                "ceph.vdo": "0",
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:                "ceph.with_tpm": "0"
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:            },
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:            "type": "block",
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:            "vg_name": "ceph_vg0"
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:        }
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]:    ]
Oct  9 09:45:16 compute-0 inspiring_bhabha[99930]: }
Oct  9 09:45:16 compute-0 systemd[1]: libpod-27e7ee775c93c3baa7911f18e593e61ee192e3418435d4ea49721fa006818c87.scope: Deactivated successfully.
Oct  9 09:45:16 compute-0 podman[99916]: 2025-10-09 09:45:16.669239093 +0000 UTC m=+0.347352810 container died 27e7ee775c93c3baa7911f18e593e61ee192e3418435d4ea49721fa006818c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_bhabha, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct  9 09:45:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e12ecbb9a2ca49de45cca53c85ac1c2f839bde2667f78a9b2e4e09a11d14965-merged.mount: Deactivated successfully.
Oct  9 09:45:16 compute-0 podman[99916]: 2025-10-09 09:45:16.696080106 +0000 UTC m=+0.374193823 container remove 27e7ee775c93c3baa7911f18e593e61ee192e3418435d4ea49721fa006818c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:45:16 compute-0 systemd[1]: libpod-conmon-27e7ee775c93c3baa7911f18e593e61ee192e3418435d4ea49721fa006818c87.scope: Deactivated successfully.
Oct  9 09:45:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:16.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:16.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:16.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:16.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:45:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:17.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:45:17 compute-0 podman[100030]: 2025-10-09 09:45:17.121256072 +0000 UTC m=+0.030557210 container create 7c3cd894aa54a37058caf1666ffe158f9cfe2d2485cad8551ab0b57c5a4a9038 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  9 09:45:17 compute-0 systemd[1]: Started libpod-conmon-7c3cd894aa54a37058caf1666ffe158f9cfe2d2485cad8551ab0b57c5a4a9038.scope.
Oct  9 09:45:17 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:45:17 compute-0 podman[100030]: 2025-10-09 09:45:17.177705005 +0000 UTC m=+0.087006164 container init 7c3cd894aa54a37058caf1666ffe158f9cfe2d2485cad8551ab0b57c5a4a9038 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:45:17 compute-0 podman[100030]: 2025-10-09 09:45:17.182681649 +0000 UTC m=+0.091982788 container start 7c3cd894aa54a37058caf1666ffe158f9cfe2d2485cad8551ab0b57c5a4a9038 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elion, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  9 09:45:17 compute-0 adoring_elion[100043]: 167 167
Oct  9 09:45:17 compute-0 systemd[1]: libpod-7c3cd894aa54a37058caf1666ffe158f9cfe2d2485cad8551ab0b57c5a4a9038.scope: Deactivated successfully.
Oct  9 09:45:17 compute-0 podman[100030]: 2025-10-09 09:45:17.187621544 +0000 UTC m=+0.096922703 container attach 7c3cd894aa54a37058caf1666ffe158f9cfe2d2485cad8551ab0b57c5a4a9038 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:45:17 compute-0 podman[100030]: 2025-10-09 09:45:17.187908285 +0000 UTC m=+0.097209454 container died 7c3cd894aa54a37058caf1666ffe158f9cfe2d2485cad8551ab0b57c5a4a9038 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elion, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:45:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa23da9b631ac5c2bc5499a903916a4b90ad5aa2b1dcfd8cd5d6b47644198b06-merged.mount: Deactivated successfully.
Oct  9 09:45:17 compute-0 podman[100030]: 2025-10-09 09:45:17.107293123 +0000 UTC m=+0.016594282 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:45:17 compute-0 podman[100030]: 2025-10-09 09:45:17.206021832 +0000 UTC m=+0.115322970 container remove 7c3cd894aa54a37058caf1666ffe158f9cfe2d2485cad8551ab0b57c5a4a9038 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_elion, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:45:17 compute-0 systemd[1]: libpod-conmon-7c3cd894aa54a37058caf1666ffe158f9cfe2d2485cad8551ab0b57c5a4a9038.scope: Deactivated successfully.
Oct  9 09:45:17 compute-0 podman[100065]: 2025-10-09 09:45:17.330192665 +0000 UTC m=+0.029364542 container create b8c8f2b09d7304d741d70ca65439a0d6ef9a175fe48ed23673e13f7472b1c874 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_swirles, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:45:17 compute-0 systemd[1]: Started libpod-conmon-b8c8f2b09d7304d741d70ca65439a0d6ef9a175fe48ed23673e13f7472b1c874.scope.
Oct  9 09:45:17 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:45:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad9ec0567625a6e0a753fe8c563f04fa5ad2293e7b328a1aafbc9c207444b2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:45:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad9ec0567625a6e0a753fe8c563f04fa5ad2293e7b328a1aafbc9c207444b2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:45:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad9ec0567625a6e0a753fe8c563f04fa5ad2293e7b328a1aafbc9c207444b2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:45:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad9ec0567625a6e0a753fe8c563f04fa5ad2293e7b328a1aafbc9c207444b2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:45:17 compute-0 podman[100065]: 2025-10-09 09:45:17.38832884 +0000 UTC m=+0.087500707 container init b8c8f2b09d7304d741d70ca65439a0d6ef9a175fe48ed23673e13f7472b1c874 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_swirles, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  9 09:45:17 compute-0 podman[100065]: 2025-10-09 09:45:17.393785379 +0000 UTC m=+0.092957247 container start b8c8f2b09d7304d741d70ca65439a0d6ef9a175fe48ed23673e13f7472b1c874 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_swirles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  9 09:45:17 compute-0 podman[100065]: 2025-10-09 09:45:17.394952109 +0000 UTC m=+0.094123976 container attach b8c8f2b09d7304d741d70ca65439a0d6ef9a175fe48ed23673e13f7472b1c874 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_swirles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 09:45:17 compute-0 podman[100065]: 2025-10-09 09:45:17.317940101 +0000 UTC m=+0.017111989 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:45:17 compute-0 lvm[100155]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:45:17 compute-0 lvm[100155]: VG ceph_vg0 finished
Oct  9 09:45:17 compute-0 exciting_swirles[100078]: {}
Oct  9 09:45:17 compute-0 systemd[1]: libpod-b8c8f2b09d7304d741d70ca65439a0d6ef9a175fe48ed23673e13f7472b1c874.scope: Deactivated successfully.
Oct  9 09:45:17 compute-0 podman[100065]: 2025-10-09 09:45:17.931156898 +0000 UTC m=+0.630328765 container died b8c8f2b09d7304d741d70ca65439a0d6ef9a175fe48ed23673e13f7472b1c874 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_swirles, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:45:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-dad9ec0567625a6e0a753fe8c563f04fa5ad2293e7b328a1aafbc9c207444b2b-merged.mount: Deactivated successfully.
Oct  9 09:45:17 compute-0 podman[100065]: 2025-10-09 09:45:17.954219637 +0000 UTC m=+0.653391503 container remove b8c8f2b09d7304d741d70ca65439a0d6ef9a175fe48ed23673e13f7472b1c874 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  9 09:45:17 compute-0 systemd[1]: libpod-conmon-b8c8f2b09d7304d741d70ca65439a0d6ef9a175fe48ed23673e13f7472b1c874.scope: Deactivated successfully.
Oct  9 09:45:17 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:45:17 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:17 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:45:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:17 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:18 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:18 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:18 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:18 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:18.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:18 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v350: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:45:19 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:19 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:45:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:19.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:45:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:45:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:45:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:45:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:45:19 compute-0 podman[100393]: 2025-10-09 09:45:19.620786671 +0000 UTC m=+0.067574764 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Oct  9 09:45:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:45:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:45:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:45:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:45:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:20.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:20 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v351: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:45:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:21.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:45:22] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  9 09:45:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:45:22] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  9 09:45:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:22.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:22 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v352: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:45:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:23 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:45:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:23.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:45:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:24.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:24 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v353: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:45:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:25.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:26.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:26 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v354: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:45:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:26.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:26.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:26.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:26.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:45:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:27.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:45:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:28 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:28.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:28 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v355: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:45:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:29.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:30.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:30 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v356: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:45:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:45:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:31.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:45:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:45:32] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  9 09:45:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:45:32] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  9 09:45:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:45:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:32.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:45:32 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v357: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:45:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:32 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:32 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:32 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:45:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:33.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:45:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:45:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:34.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:45:34 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v358: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:45:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:45:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:45:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:35.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:45:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:36.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:45:36 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v359: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:45:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:36.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:36.995Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:36.996Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:36.996Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:37.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:38 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:45:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:38.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:45:38 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v360: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:45:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:45:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:39.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:45:39 compute-0 podman[117011]: 2025-10-09 09:45:39.597692131 +0000 UTC m=+0.039147800 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:45:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:45:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:40.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:45:40 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v361: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:45:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:41.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:45:42] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Oct  9 09:45:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:45:42] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Oct  9 09:45:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:42.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:42 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v362: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:45:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:43.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:45:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:44.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:45:44 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v363: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:45:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:45.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:45 compute-0 kernel: SELinux:  Converting 484 SID table entries...
Oct  9 09:45:45 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 09:45:45 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct  9 09:45:45 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 09:45:45 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct  9 09:45:45 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 09:45:45 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 09:45:45 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 09:45:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:46 compute-0 dbus-broker-launch[789]: Noticed file-system modification, trigger reload.
Oct  9 09:45:46 compute-0 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=5 res=1
Oct  9 09:45:46 compute-0 dbus-broker-launch[789]: Noticed file-system modification, trigger reload.
Oct  9 09:45:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:46.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:46 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v364: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:45:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:46.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:46.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:46.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:46.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:47.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:48 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v365: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:45:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:48.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:49.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:45:49
Oct  9 09:45:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:45:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 09:45:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['volumes', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'vms', 'images', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', '.nfs', 'default.rgw.log']
Oct  9 09:45:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 09:45:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:45:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:45:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:45:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:45:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:45:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:45:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:45:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:45:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:45:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:45:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:45:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:45:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:45:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:45:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:45:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:45:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:45:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:45:50 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v366: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:45:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:45:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:50.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:45:50 compute-0 podman[117317]: 2025-10-09 09:45:50.638270274 +0000 UTC m=+0.076234051 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  9 09:45:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:51.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:51 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Oct  9 09:45:51 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Oct  9 09:45:51 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Oct  9 09:45:51 compute-0 systemd[1]: sshd.service: Consumed 876ms CPU time, read 2.7M from disk, written 0B to disk.
Oct  9 09:45:51 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Oct  9 09:45:51 compute-0 systemd[1]: Stopping sshd-keygen.target...
Oct  9 09:45:51 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 09:45:51 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 09:45:51 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 09:45:51 compute-0 systemd[1]: Reached target sshd-keygen.target.
Oct  9 09:45:51 compute-0 systemd[1]: Starting OpenSSH server daemon...
Oct  9 09:45:51 compute-0 systemd[1]: Started OpenSSH server daemon.
Oct  9 09:45:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:45:52] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct  9 09:45:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:45:52] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct  9 09:45:52 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v367: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:45:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:52.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:53 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 09:45:53 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct  9 09:45:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:53.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:53 compute-0 systemd[1]: Reloading.
Oct  9 09:45:53 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:45:53 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:45:53 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 09:45:53 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct  9 09:45:53 compute-0 systemd[1]: Started PackageKit Daemon.
Oct  9 09:45:54 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v368: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:45:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:54.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:45:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:55.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:45:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:56 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v369: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:45:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:56.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:56.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:56.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:56.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:45:56.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:45:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:57.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:45:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:58 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 09:45:58 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct  9 09:45:58 compute-0 systemd[1]: man-db-cache-update.service: Consumed 6.748s CPU time.
Oct  9 09:45:58 compute-0 systemd[1]: run-rbd1f19312f404ec495dd334cfe53aa92.service: Deactivated successfully.
Oct  9 09:45:58 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v370: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:45:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:58.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:45:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:45:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:59.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:45:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  9 09:46:00 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v371: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:46:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:00.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:01.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:46:02] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct  9 09:46:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:46:02] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Oct  9 09:46:02 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v372: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:46:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:02.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:03.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v373: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:46:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:04.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:46:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:46:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:05.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v374: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:46:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:06.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:06.990Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:06.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:06.999Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:06.999Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:06 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:06 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:06 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:06 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:07.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v375: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:46:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:08.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:46:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:09.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:46:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:46:10.097 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:46:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:46:10.097 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:46:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:46:10.097 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:46:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v376: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:46:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:10.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:10 compute-0 podman[126597]: 2025-10-09 09:46:10.591406833 +0000 UTC m=+0.033380344 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  9 09:46:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:11 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:11.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:46:12] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  9 09:46:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:46:12] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Oct  9 09:46:12 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v377: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:46:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:12.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:13.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:14 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v378: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:46:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:14.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:15.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:16 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:16 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v379: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:46:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:16.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:16.990Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:16.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:16.999Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:16.999Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:17.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:18 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v380: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:46:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:18.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:18 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:46:18 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:46:18 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:46:18 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:46:18 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:46:18 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:46:18 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:46:18 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:46:18 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:46:18 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:46:18 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:46:18 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:46:18 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:46:18 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:46:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:19.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:19 compute-0 podman[126781]: 2025-10-09 09:46:19.19704853 +0000 UTC m=+0.036877856 container create 3a65cbe2cb361b240f0f54124da80802f06bdda6c5a4495a2c4e64b6ab1dd729 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_dewdney, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:46:19 compute-0 systemd[1]: Started libpod-conmon-3a65cbe2cb361b240f0f54124da80802f06bdda6c5a4495a2c4e64b6ab1dd729.scope.
Oct  9 09:46:19 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:46:19 compute-0 podman[126781]: 2025-10-09 09:46:19.261015961 +0000 UTC m=+0.100845296 container init 3a65cbe2cb361b240f0f54124da80802f06bdda6c5a4495a2c4e64b6ab1dd729 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_dewdney, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  9 09:46:19 compute-0 podman[126781]: 2025-10-09 09:46:19.265907496 +0000 UTC m=+0.105736831 container start 3a65cbe2cb361b240f0f54124da80802f06bdda6c5a4495a2c4e64b6ab1dd729 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 09:46:19 compute-0 podman[126781]: 2025-10-09 09:46:19.267107841 +0000 UTC m=+0.106937177 container attach 3a65cbe2cb361b240f0f54124da80802f06bdda6c5a4495a2c4e64b6ab1dd729 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_dewdney, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:46:19 compute-0 silly_dewdney[126794]: 167 167
Oct  9 09:46:19 compute-0 systemd[1]: libpod-3a65cbe2cb361b240f0f54124da80802f06bdda6c5a4495a2c4e64b6ab1dd729.scope: Deactivated successfully.
Oct  9 09:46:19 compute-0 podman[126781]: 2025-10-09 09:46:19.272215582 +0000 UTC m=+0.112044918 container died 3a65cbe2cb361b240f0f54124da80802f06bdda6c5a4495a2c4e64b6ab1dd729 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_dewdney, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:46:19 compute-0 podman[126781]: 2025-10-09 09:46:19.180872175 +0000 UTC m=+0.020701520 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:46:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e506efdc5d86b0c444a1126b066e209827e8e66d5c63c7f6fb62632b2fae043-merged.mount: Deactivated successfully.
Oct  9 09:46:19 compute-0 podman[126781]: 2025-10-09 09:46:19.308921782 +0000 UTC m=+0.148751118 container remove 3a65cbe2cb361b240f0f54124da80802f06bdda6c5a4495a2c4e64b6ab1dd729 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_dewdney, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  9 09:46:19 compute-0 systemd[1]: libpod-conmon-3a65cbe2cb361b240f0f54124da80802f06bdda6c5a4495a2c4e64b6ab1dd729.scope: Deactivated successfully.
Oct  9 09:46:19 compute-0 podman[126817]: 2025-10-09 09:46:19.447604583 +0000 UTC m=+0.037647378 container create 6a6c07429716e06262be07a46ae963cca5cfa08447d3f5dc255238d3d0383f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_noyce, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:46:19 compute-0 systemd[1]: Started libpod-conmon-6a6c07429716e06262be07a46ae963cca5cfa08447d3f5dc255238d3d0383f8b.scope.
Oct  9 09:46:19 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f274325f1ce5a4e9a2216fb4983639f52451ad26d804288735ff67249ad075/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f274325f1ce5a4e9a2216fb4983639f52451ad26d804288735ff67249ad075/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f274325f1ce5a4e9a2216fb4983639f52451ad26d804288735ff67249ad075/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f274325f1ce5a4e9a2216fb4983639f52451ad26d804288735ff67249ad075/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:46:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f274325f1ce5a4e9a2216fb4983639f52451ad26d804288735ff67249ad075/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:46:19 compute-0 podman[126817]: 2025-10-09 09:46:19.518667449 +0000 UTC m=+0.108710253 container init 6a6c07429716e06262be07a46ae963cca5cfa08447d3f5dc255238d3d0383f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_noyce, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:46:19 compute-0 podman[126817]: 2025-10-09 09:46:19.525681419 +0000 UTC m=+0.115724213 container start 6a6c07429716e06262be07a46ae963cca5cfa08447d3f5dc255238d3d0383f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_noyce, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:46:19 compute-0 podman[126817]: 2025-10-09 09:46:19.432516162 +0000 UTC m=+0.022558976 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:46:19 compute-0 podman[126817]: 2025-10-09 09:46:19.527102571 +0000 UTC m=+0.117145375 container attach 6a6c07429716e06262be07a46ae963cca5cfa08447d3f5dc255238d3d0383f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 09:46:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:46:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:46:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:46:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:46:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:46:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:46:19 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:46:19 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:46:19 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:46:19 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:46:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:46:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:46:19 compute-0 happy_noyce[126830]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:46:19 compute-0 happy_noyce[126830]: --> All data devices are unavailable
Oct  9 09:46:19 compute-0 systemd[1]: libpod-6a6c07429716e06262be07a46ae963cca5cfa08447d3f5dc255238d3d0383f8b.scope: Deactivated successfully.
Oct  9 09:46:19 compute-0 podman[126817]: 2025-10-09 09:46:19.8419992 +0000 UTC m=+0.432041985 container died 6a6c07429716e06262be07a46ae963cca5cfa08447d3f5dc255238d3d0383f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_noyce, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  9 09:46:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-12f274325f1ce5a4e9a2216fb4983639f52451ad26d804288735ff67249ad075-merged.mount: Deactivated successfully.
Oct  9 09:46:19 compute-0 podman[126817]: 2025-10-09 09:46:19.871464342 +0000 UTC m=+0.461507136 container remove 6a6c07429716e06262be07a46ae963cca5cfa08447d3f5dc255238d3d0383f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Oct  9 09:46:19 compute-0 systemd[1]: libpod-conmon-6a6c07429716e06262be07a46ae963cca5cfa08447d3f5dc255238d3d0383f8b.scope: Deactivated successfully.
Oct  9 09:46:20 compute-0 podman[126988]: 2025-10-09 09:46:20.397219166 +0000 UTC m=+0.034757844 container create 4a19dace8e15072d51a8f0570e6c7e7948e7c1b57b03c0d17f35a1634f331f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_mirzakhani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 09:46:20 compute-0 systemd[1]: Started libpod-conmon-4a19dace8e15072d51a8f0570e6c7e7948e7c1b57b03c0d17f35a1634f331f8b.scope.
Oct  9 09:46:20 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v381: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:46:20 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:46:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:20.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:20 compute-0 podman[126988]: 2025-10-09 09:46:20.44838913 +0000 UTC m=+0.085927828 container init 4a19dace8e15072d51a8f0570e6c7e7948e7c1b57b03c0d17f35a1634f331f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:46:20 compute-0 podman[126988]: 2025-10-09 09:46:20.4532791 +0000 UTC m=+0.090817788 container start 4a19dace8e15072d51a8f0570e6c7e7948e7c1b57b03c0d17f35a1634f331f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:46:20 compute-0 podman[126988]: 2025-10-09 09:46:20.454778471 +0000 UTC m=+0.092317169 container attach 4a19dace8e15072d51a8f0570e6c7e7948e7c1b57b03c0d17f35a1634f331f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_mirzakhani, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:46:20 compute-0 sharp_mirzakhani[127002]: 167 167
Oct  9 09:46:20 compute-0 systemd[1]: libpod-4a19dace8e15072d51a8f0570e6c7e7948e7c1b57b03c0d17f35a1634f331f8b.scope: Deactivated successfully.
Oct  9 09:46:20 compute-0 podman[126988]: 2025-10-09 09:46:20.457052343 +0000 UTC m=+0.094591021 container died 4a19dace8e15072d51a8f0570e6c7e7948e7c1b57b03c0d17f35a1634f331f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_mirzakhani, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  9 09:46:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-75e30dcdcbb113074ce3f1ea4ecf47688d6cf48f0131010d3f0a7c01f3194310-merged.mount: Deactivated successfully.
Oct  9 09:46:20 compute-0 podman[126988]: 2025-10-09 09:46:20.481446081 +0000 UTC m=+0.118984749 container remove 4a19dace8e15072d51a8f0570e6c7e7948e7c1b57b03c0d17f35a1634f331f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_mirzakhani, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:46:20 compute-0 podman[126988]: 2025-10-09 09:46:20.385644297 +0000 UTC m=+0.023182994 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:46:20 compute-0 systemd[1]: libpod-conmon-4a19dace8e15072d51a8f0570e6c7e7948e7c1b57b03c0d17f35a1634f331f8b.scope: Deactivated successfully.
Oct  9 09:46:20 compute-0 podman[127099]: 2025-10-09 09:46:20.625521982 +0000 UTC m=+0.037191387 container create 84ed6d7ca0532fae9161022ae516a456d734db419a7a3ad973e4b4c3486ca41c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_dhawan, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  9 09:46:20 compute-0 systemd[1]: Started libpod-conmon-84ed6d7ca0532fae9161022ae516a456d734db419a7a3ad973e4b4c3486ca41c.scope.
Oct  9 09:46:20 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c54ebcbfb7a1874f295971173f6af5daa51650f51c7bca1f32de615974acd82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c54ebcbfb7a1874f295971173f6af5daa51650f51c7bca1f32de615974acd82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c54ebcbfb7a1874f295971173f6af5daa51650f51c7bca1f32de615974acd82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c54ebcbfb7a1874f295971173f6af5daa51650f51c7bca1f32de615974acd82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:46:20 compute-0 podman[127099]: 2025-10-09 09:46:20.699344927 +0000 UTC m=+0.111014333 container init 84ed6d7ca0532fae9161022ae516a456d734db419a7a3ad973e4b4c3486ca41c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_dhawan, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:46:20 compute-0 podman[127099]: 2025-10-09 09:46:20.609195953 +0000 UTC m=+0.020865379 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:46:20 compute-0 podman[127099]: 2025-10-09 09:46:20.714519913 +0000 UTC m=+0.126189318 container start 84ed6d7ca0532fae9161022ae516a456d734db419a7a3ad973e4b4c3486ca41c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_dhawan, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:46:20 compute-0 podman[127099]: 2025-10-09 09:46:20.716066332 +0000 UTC m=+0.127735758 container attach 84ed6d7ca0532fae9161022ae516a456d734db419a7a3ad973e4b4c3486ca41c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  9 09:46:20 compute-0 podman[127114]: 2025-10-09 09:46:20.751705498 +0000 UTC m=+0.080586904 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true)
Oct  9 09:46:20 compute-0 python3.9[127107]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]: {
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:    "1": [
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:        {
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:            "devices": [
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:                "/dev/loop3"
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:            ],
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:            "lv_name": "ceph_lv0",
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:            "lv_size": "21470642176",
Oct  9 09:46:20 compute-0 systemd[1]: Reloading.
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:            "name": "ceph_lv0",
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:            "tags": {
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:                "ceph.cluster_name": "ceph",
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:                "ceph.crush_device_class": "",
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:                "ceph.encrypted": "0",
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:                "ceph.osd_id": "1",
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:                "ceph.type": "block",
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:                "ceph.vdo": "0",
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:                "ceph.with_tpm": "0"
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:            },
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:            "type": "block",
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:            "vg_name": "ceph_vg0"
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:        }
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]:    ]
Oct  9 09:46:20 compute-0 compassionate_dhawan[127115]: }
Oct  9 09:46:20 compute-0 podman[127099]: 2025-10-09 09:46:20.951396514 +0000 UTC m=+0.363065939 container died 84ed6d7ca0532fae9161022ae516a456d734db419a7a3ad973e4b4c3486ca41c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_dhawan, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:46:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:20 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:20 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:20 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:21 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:21 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:21 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:46:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:21.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:46:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:21 compute-0 systemd[1]: libpod-84ed6d7ca0532fae9161022ae516a456d734db419a7a3ad973e4b4c3486ca41c.scope: Deactivated successfully.
Oct  9 09:46:21 compute-0 systemd[1]: Starting dnf makecache...
Oct  9 09:46:21 compute-0 podman[127099]: 2025-10-09 09:46:21.228322606 +0000 UTC m=+0.639992011 container remove 84ed6d7ca0532fae9161022ae516a456d734db419a7a3ad973e4b4c3486ca41c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_dhawan, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:46:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c54ebcbfb7a1874f295971173f6af5daa51650f51c7bca1f32de615974acd82-merged.mount: Deactivated successfully.
Oct  9 09:46:21 compute-0 systemd[1]: libpod-conmon-84ed6d7ca0532fae9161022ae516a456d734db419a7a3ad973e4b4c3486ca41c.scope: Deactivated successfully.
Oct  9 09:46:21 compute-0 dnf[127193]: Metadata cache refreshed recently.
Oct  9 09:46:21 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  9 09:46:21 compute-0 systemd[1]: Finished dnf makecache.
Oct  9 09:46:21 compute-0 podman[127426]: 2025-10-09 09:46:21.694097537 +0000 UTC m=+0.031549437 container create 67255c02bd321143d6024e533f050e9a8e97c686b3d3ce7a0888caad225fd50a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:46:21 compute-0 systemd[1]: Started libpod-conmon-67255c02bd321143d6024e533f050e9a8e97c686b3d3ce7a0888caad225fd50a.scope.
Oct  9 09:46:21 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:46:21 compute-0 podman[127426]: 2025-10-09 09:46:21.741958594 +0000 UTC m=+0.079410494 container init 67255c02bd321143d6024e533f050e9a8e97c686b3d3ce7a0888caad225fd50a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:46:21 compute-0 podman[127426]: 2025-10-09 09:46:21.747268477 +0000 UTC m=+0.084720377 container start 67255c02bd321143d6024e533f050e9a8e97c686b3d3ce7a0888caad225fd50a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_sammet, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:46:21 compute-0 podman[127426]: 2025-10-09 09:46:21.75019939 +0000 UTC m=+0.087651300 container attach 67255c02bd321143d6024e533f050e9a8e97c686b3d3ce7a0888caad225fd50a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_sammet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  9 09:46:21 compute-0 agitated_sammet[127439]: 167 167
Oct  9 09:46:21 compute-0 systemd[1]: libpod-67255c02bd321143d6024e533f050e9a8e97c686b3d3ce7a0888caad225fd50a.scope: Deactivated successfully.
Oct  9 09:46:21 compute-0 podman[127426]: 2025-10-09 09:46:21.75170943 +0000 UTC m=+0.089161330 container died 67255c02bd321143d6024e533f050e9a8e97c686b3d3ce7a0888caad225fd50a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  9 09:46:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-4970448570e76f898416958881e1b470bbb1e09d12631bdb2bb1f33d11330956-merged.mount: Deactivated successfully.
Oct  9 09:46:21 compute-0 podman[127426]: 2025-10-09 09:46:21.682480988 +0000 UTC m=+0.019932909 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:46:21 compute-0 podman[127426]: 2025-10-09 09:46:21.778884268 +0000 UTC m=+0.116336169 container remove 67255c02bd321143d6024e533f050e9a8e97c686b3d3ce7a0888caad225fd50a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_sammet, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:46:21 compute-0 systemd[1]: libpod-conmon-67255c02bd321143d6024e533f050e9a8e97c686b3d3ce7a0888caad225fd50a.scope: Deactivated successfully.
Oct  9 09:46:21 compute-0 python3.9[127424]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:46:21 compute-0 podman[127460]: 2025-10-09 09:46:21.936238265 +0000 UTC m=+0.049347866 container create 2e619f400680c98fdcdae788356c7111584e7087e0a0a3333080b442608b556d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_poitras, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:46:21 compute-0 systemd[1]: Reloading.
Oct  9 09:46:22 compute-0 podman[127460]: 2025-10-09 09:46:21.918619185 +0000 UTC m=+0.031728806 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:46:22 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:22 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:22 compute-0 systemd[1]: Started libpod-conmon-2e619f400680c98fdcdae788356c7111584e7087e0a0a3333080b442608b556d.scope.
Oct  9 09:46:22 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:46:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c482677a021bfd406a70b209d8d33e03a56145e2c5fc8783ffe3e95a8b695d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:46:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c482677a021bfd406a70b209d8d33e03a56145e2c5fc8783ffe3e95a8b695d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:46:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c482677a021bfd406a70b209d8d33e03a56145e2c5fc8783ffe3e95a8b695d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:46:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c482677a021bfd406a70b209d8d33e03a56145e2c5fc8783ffe3e95a8b695d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:46:22 compute-0 podman[127460]: 2025-10-09 09:46:22.256721409 +0000 UTC m=+0.369831029 container init 2e619f400680c98fdcdae788356c7111584e7087e0a0a3333080b442608b556d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:46:22 compute-0 podman[127460]: 2025-10-09 09:46:22.262926763 +0000 UTC m=+0.376036363 container start 2e619f400680c98fdcdae788356c7111584e7087e0a0a3333080b442608b556d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_poitras, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  9 09:46:22 compute-0 podman[127460]: 2025-10-09 09:46:22.265410833 +0000 UTC m=+0.378520432 container attach 2e619f400680c98fdcdae788356c7111584e7087e0a0a3333080b442608b556d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  9 09:46:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:46:22] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Oct  9 09:46:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:46:22] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Oct  9 09:46:22 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v382: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 0 B/s wr, 165 op/s
Oct  9 09:46:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:22.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:22 compute-0 bold_poitras[127512]: {}
Oct  9 09:46:22 compute-0 lvm[127742]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:46:22 compute-0 lvm[127742]: VG ceph_vg0 finished
Oct  9 09:46:22 compute-0 systemd[1]: libpod-2e619f400680c98fdcdae788356c7111584e7087e0a0a3333080b442608b556d.scope: Deactivated successfully.
Oct  9 09:46:22 compute-0 podman[127460]: 2025-10-09 09:46:22.807034843 +0000 UTC m=+0.920144463 container died 2e619f400680c98fdcdae788356c7111584e7087e0a0a3333080b442608b556d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_poitras, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:46:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-22c482677a021bfd406a70b209d8d33e03a56145e2c5fc8783ffe3e95a8b695d-merged.mount: Deactivated successfully.
Oct  9 09:46:22 compute-0 lvm[127745]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:46:22 compute-0 lvm[127745]: VG ceph_vg0 finished
Oct  9 09:46:22 compute-0 podman[127460]: 2025-10-09 09:46:22.842851213 +0000 UTC m=+0.955960813 container remove 2e619f400680c98fdcdae788356c7111584e7087e0a0a3333080b442608b556d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_poitras, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:46:22 compute-0 systemd[1]: libpod-conmon-2e619f400680c98fdcdae788356c7111584e7087e0a0a3333080b442608b556d.scope: Deactivated successfully.
Oct  9 09:46:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:46:22 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:46:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:46:22 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:46:22 compute-0 python3.9[127706]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:46:22 compute-0 systemd[1]: Reloading.
Oct  9 09:46:23 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:23 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:46:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:23.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:46:23 compute-0 auditd[734]: Audit daemon rotating log files
Oct  9 09:46:23 compute-0 python3.9[127968]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:46:23 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:46:23 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:46:23 compute-0 systemd[1]: Reloading.
Oct  9 09:46:23 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:23 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:24 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v383: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 0 B/s wr, 164 op/s
Oct  9 09:46:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:24.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:24 compute-0 python3.9[128160]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:24 compute-0 systemd[1]: Reloading.
Oct  9 09:46:24 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:24 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:46:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:25.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:46:25 compute-0 python3.9[128350]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:25 compute-0 systemd[1]: Reloading.
Oct  9 09:46:25 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:25 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:25 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:26 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:26 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:26 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:26 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v384: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 0 B/s wr, 165 op/s
Oct  9 09:46:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:26.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:26 compute-0 python3.9[128542]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:26 compute-0 systemd[1]: Reloading.
Oct  9 09:46:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:26.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:27.008Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:27.008Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:27.009Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:27 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:27 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:27.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:27 compute-0 python3.9[128732]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:28 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v385: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 0 B/s wr, 164 op/s
Oct  9 09:46:28 compute-0 python3.9[128888]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:28.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:28 compute-0 systemd[1]: Reloading.
Oct  9 09:46:28 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:28 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:29.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:29 compute-0 python3.9[129079]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:46:29 compute-0 systemd[1]: Reloading.
Oct  9 09:46:29 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:29 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:29 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Oct  9 09:46:29 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct  9 09:46:30 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v386: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 0 B/s wr, 164 op/s
Oct  9 09:46:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:30.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:30 compute-0 python3.9[129297]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:30 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:30 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:30 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:31 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
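
The four ganesha lines above are one grace cycle: the NFS server enters a 90 second grace window, reloads client reclaim state from the RADOS backend, and finds no clients to wait for (clid count(0)); the cycle then restarts a few seconds later. Worked check of the window implied by "duration 90", using the timestamp from the first line of the burst:

    from datetime import datetime, timedelta

    # Grace opened at 09:46:30 with duration 90, so absent an early lift
    # it would expire at 09:48:00.
    start = datetime(2025, 10, 9, 9, 46, 30)
    print(start + timedelta(seconds=90))   # 2025-10-09 09:48:00
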
Oct  9 09:46:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:31.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
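
The mon line above reports how the monitor splits its cache budget between incremental osdmaps, full maps, and the kv cache; as a quick check, the three logged allocations sum to just under the reported cache_size:

    # Worked check against the _set_new_cache_sizes line above.
    inc, full, kv = 348127232, 348127232, 318767104
    print(inc + full + kv)   # 1015021568, vs cache_size 1020054731
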
Oct  9 09:46:31 compute-0 python3.9[129453]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:31 compute-0 python3.9[129608]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:46:32] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Oct  9 09:46:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:46:32] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
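
The pair of lines above is the same Prometheus scrape recorded twice, once by the containerized mgr unit and once by the mgr's cherrypy access log. A minimal fetch of the same endpoint; port 9283 is the prometheus module default and an assumption here, since the log shows only the "/metrics" path:

    import urllib.request

    # Port 9283 is assumed (ceph-mgr prometheus module default).
    with urllib.request.urlopen("http://compute-0:9283/metrics",
                                timeout=5) as resp:
        first_line = resp.read().decode().splitlines()[0]
    print(first_line)
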
Oct  9 09:46:32 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v387: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 0 B/s wr, 165 op/s
Oct  9 09:46:32 compute-0 python3.9[129764]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:32.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:33 compute-0 python3.9[129920]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:46:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:33.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:46:33 compute-0 python3.9[130075]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:34 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v388: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:46:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:46:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:34.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:46:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:46:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:46:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:46:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:35.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:46:35 compute-0 python3.9[130232]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:36 compute-0 python3.9[130387]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:36 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v389: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:46:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:36.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:36 compute-0 python3.9[130543]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:36.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:37.004Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:37.004Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:37.005Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
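
The alertmanager burst above shows the ceph-dashboard webhook receivers failing hard: every np000547830x.shiftstack lookup against 192.168.122.80 returns "no such host", so notifications retry and are eventually cancelled. A quick resolution check for the same names; this uses the system resolver rather than querying 192.168.122.80 directly:

    import socket

    # Hostnames copied from the dispatcher errors above.
    for host in ("np0005478302.shiftstack",
                 "np0005478303.shiftstack",
                 "np0005478304.shiftstack"):
        try:
            print(host, "->", socket.gethostbyname(host))
        except socket.gaierror as exc:
            print(host, "->", exc)   # expected to fail, as in the log
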
Oct  9 09:46:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:37.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:37 compute-0 python3.9[130699]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:37 compute-0 python3.9[130854]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:38 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v390: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:46:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:38.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:38 compute-0 python3.9[131010]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:39.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:39 compute-0 python3.9[131166]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:39 compute-0 python3.9[131321]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:40 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:40 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v391: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:46:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:40.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:40 compute-0 python3.9[131477]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:46:40 compute-0 podman[131602]: 2025-10-09 09:46:40.961993538 +0000 UTC m=+0.042349253 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
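
The podman event above is a periodic health check result for ovn_metadata_agent (health_status=healthy, failing streak 0), with the container's full config echoed into the event. The current status can be read back with an inspect template; the Docker-compatible .State.Health.Status path is an assumption about the installed podman version:

    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}",
         "ovn_metadata_agent"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())   # e.g. "healthy"
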
Oct  9 09:46:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:41.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:41 compute-0 python3.9[131646]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:46:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:41 compute-0 python3.9[131798]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:46:42 compute-0 python3.9[131950]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:46:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:46:42] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Oct  9 09:46:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:46:42] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Oct  9 09:46:42 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v392: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 09:46:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:42.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:42 compute-0 python3.9[132103]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:46:43 compute-0 python3.9[132256]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:46:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:43.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:43 compute-0 python3.9[132408]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:44 compute-0 python3.9[132534]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003203.2453995-1622-122824375835790/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
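
The stat/copy pairs above are ansible's usual file deployment pattern: ansible.legacy.stat fetches a sha1 checksum first, and ansible.legacy.copy only rewrites the file when it differs from the checksum logged with the task. The same comparison by hand, using the checksum from the copy line above:

    import hashlib

    def sha1_of(path):
        """Stream a file through sha1, as ansible's checksum does."""
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Compare against d7a72ae92c2c205983b029473e05a6aa4c58ec24 from the log.
    print(sha1_of("/etc/libvirt/virtlogd.conf"))
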
Oct  9 09:46:44 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v393: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:46:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:44.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:44 compute-0 python3.9[132687]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:44 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:44 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:45 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:45 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:45.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:45 compute-0 python3.9[132812]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003204.418354-1622-205145279603717/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:45 compute-0 python3.9[132964]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:46 compute-0 python3.9[133089]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003205.3000195-1622-15024804985449/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:46 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v394: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 09:46:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:46:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:46.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:46:46 compute-0 python3.9[133242]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:46 compute-0 python3.9[133368]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003206.1848423-1622-41022900899964/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:46.995Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:47.005Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:47.005Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:47.006Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:47.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:47 compute-0 python3.9[133520]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:47 compute-0 python3.9[133645]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003207.1047847-1622-279974989135592/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:48 compute-0 python3.9[133798]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:48 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v395: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:46:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:48.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:48 compute-0 python3.9[133924]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003207.9855993-1622-201335172538974/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:49.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:49 compute-0 python3.9[134076]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:46:49
Oct  9 09:46:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:46:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 09:46:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'vms', 'volumes', '.nfs', 'backups', 'default.rgw.log']
Oct  9 09:46:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
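
The balancer lines above are one idle optimization pass: in upmap mode with max misplaced 0.05, it scans the listed pools and prepares 0 of a possible 10 upmap changes, the expected outcome while all 337 PGs are active+clean. The same state can be inspected from the CLI:

    import subprocess

    # Companion view of the balancer activity logged above.
    out = subprocess.run(["ceph", "balancer", "status"],
                         capture_output=True, text=True, check=True)
    print(out.stdout)
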
Oct  9 09:46:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:46:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
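
The audit pair above shows the mgr polling the OSD blocklist over mon_command every few seconds. The equivalent query from the CLI, matching the prefix and format in the dispatched command:

    import json, subprocess

    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    # Empty list when nothing is blocklisted.
    print(json.loads(out.stdout or "[]"))
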
Oct  9 09:46:49 compute-0 python3.9[134199]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003208.8545752-1622-234995063352433/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:46:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:46:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:46:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:46:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:46:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:46:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:46:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:46:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:46:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:46:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:46:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:46:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:46:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:46:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:46:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:46:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:50 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:50 compute-0 python3.9[134376]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:50 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v396: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:46:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:50.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:50 compute-0 python3.9[134502]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003209.7194061-1622-242227217849119/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:50 compute-0 podman[134627]: 2025-10-09 09:46:50.88627782 +0000 UTC m=+0.054319870 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  9 09:46:51 compute-0 python3.9[134671]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
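
The command task above seeds the SASL database used for libvirt migration auth: saslpasswd2 reads the password from stdin because of -p, and the play passes stdin=12345678 with a trailing newline (stdin_add_newline=True). The same call reproduced directly:

    import subprocess

    subprocess.run(
        ["saslpasswd2", "-f", "/etc/libvirt/passwd.db", "-p",
         "-a", "libvirt", "-u", "openstack", "migration"],
        input="12345678\n", text=True, check=True,
    )
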
Oct  9 09:46:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:51.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:51 compute-0 ceph-mgr[4772]: [devicehealth INFO root] Check health
Oct  9 09:46:51 compute-0 python3.9[134831]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:52 compute-0 python3.9[134983]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:46:52] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  9 09:46:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:46:52] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  9 09:46:52 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v397: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 09:46:52 compute-0 python3.9[135136]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:46:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:52.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:46:52 compute-0 python3.9[135289]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:53.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:53 compute-0 python3.9[135441]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:53 compute-0 python3.9[135593]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:54 compute-0 python3.9[135746]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:54 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v398: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:46:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:46:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:54.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:46:54 compute-0 python3.9[135899]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:54 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:54 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:54 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:54 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:55.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:55 compute-0 python3.9[136051]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:55 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Oct  9 09:46:55 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct  9 09:46:55 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Oct  9 09:46:55 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct  9 09:46:55 compute-0 python3.9[136204]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:56 compute-0 python3.9[136357]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:56 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v399: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:46:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:56.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:56 compute-0 python3.9[136510]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:56 compute-0 python3.9[136663]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:56.996Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:57.004Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:57.004Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:46:57.004Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:46:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:57.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:57 compute-0 python3.9[136815]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:57 compute-0 python3.9[136967]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:58 compute-0 python3.9[137091]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003217.594363-2285-196740895358399/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
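
The tasks above create per-unit *.socket.d directories and install override.conf drop-ins for each libvirt socket; the drop-in content itself is not logged (content=NOT_LOGGING_PARAMETER). A sketch of the same mechanism with a placeholder body, followed by the daemon reload that makes systemd pick the drop-in up:

    import pathlib, subprocess

    dropin = pathlib.Path("/etc/systemd/system/virtlogd.socket.d/override.conf")
    dropin.parent.mkdir(parents=True, exist_ok=True)
    # Placeholder body; the real override content is not in the log.
    dropin.write_text("[Socket]\n# site-specific socket overrides\n")
    subprocess.run(["systemctl", "daemon-reload"], check=True)
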
Oct  9 09:46:58 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v400: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:46:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:58.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:58 compute-0 python3.9[137244]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:59 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:59 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:59 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:59 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:46:59 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:46:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:46:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:59.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:46:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
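[note] Each "using X of space, bias B, pg target T" line above follows simple arithmetic. A back-of-envelope check, assuming the simplified model pg_target = capacity_ratio * bias * pg_budget, where pg_budget is a hypothetical per-root PG budget (the real module derives it from mon_target_pg_per_osd and the OSD tree); a budget of 300 reproduces every figure logged for this root:

    import math

    def pg_target(capacity_ratio: float, bias: float, pg_budget: int = 300) -> float:
        return capacity_ratio * bias * pg_budget

    assert math.isclose(pg_target(7.185749983720779e-06, 1.0), 0.0021557249951162337)  # '.mgr'
    assert math.isclose(pg_target(5.087256625643029e-07, 4.0), 0.0006104707950771635)  # 'cephfs.cephfs.meta'

The ideal is then quantized to a power of two, and, as every "(current N)" suffix shows, these nearly empty pools are left at their existing pg_num.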
Oct  9 09:46:59 compute-0 python3.9[137367]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003218.446696-2285-250786636659822/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:59 compute-0 python3.9[137519]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:00 compute-0 python3.9[137642]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003219.2928736-2285-24120236139358/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:00 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v401: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:47:00 compute-0 python3.9[137795]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:00.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:00 compute-0 python3.9[137919]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003220.1372783-2285-183685627037653/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:01.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:01 compute-0 python3.9[138071]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:01 compute-0 python3.9[138194]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003220.9952862-2285-211481675618078/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:02 compute-0 python3.9[138347]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:47:02] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  9 09:47:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:47:02] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  9 09:47:02 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v402: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:47:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:47:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:02.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:47:02 compute-0 python3.9[138470]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003221.8651767-2285-209521105688595/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:03 compute-0 python3.9[138623]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:03.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:03 compute-0 python3.9[138746]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003222.7581816-2285-171547496858712/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:03 compute-0 python3.9[138898]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:04 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:03 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:04 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:04 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:04 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:04 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:04 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:04 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:04 compute-0 python3.9[139022]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003223.63713-2285-162660210820653/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v403: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:47:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:47:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:04.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:47:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:47:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:47:04 compute-0 python3.9[139175]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:05.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:05 compute-0 python3.9[139298]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003224.5089426-2285-250078468279609/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:05 compute-0 python3.9[139450]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:06 compute-0 python3.9[139574]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003225.4215057-2285-94645406022248/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v404: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:47:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:06.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:06 compute-0 python3.9[139727]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:06.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:07.009Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:07.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:07.011Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
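[note] All three webhook failures above come down to the same NXDOMAIN: the resolver at 192.168.122.80 has no records for the .shiftstack names in the ceph-dashboard receiver URLs. A quick check from the host (this uses the system resolver, so it assumes /etc/resolv.conf points at the same 192.168.122.80 the errors name):

    import socket

    # Expect socket.gaierror ("Name or service not known") for each name,
    # matching the "no such host" errors Alertmanager logs above.
    for host in ("np0005478302.shiftstack",
                 "np0005478303.shiftstack",
                 "np0005478304.shiftstack"):
        try:
            print(host, "->", socket.gethostbyname(host))
        except socket.gaierror as exc:
            print(host, "->", exc)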
Oct  9 09:47:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:07.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:07 compute-0 python3.9[139850]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003226.3521583-2285-194600379108148/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:07 compute-0 python3.9[140002]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:08 compute-0 python3.9[140125]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003227.318233-2285-171871352309343/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v405: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:47:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:08.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:08 compute-0 python3.9[140278]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:08 compute-0 python3.9[140402]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003228.1946003-2285-198764667347346/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:08 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:09 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:09 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:09 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:09.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:09 compute-0 python3.9[140554]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:09 compute-0 python3.9[140677]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003229.0448816-2285-52024216993417/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:47:10.097 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:47:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:47:10.098 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:47:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:47:10.098 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:47:10 compute-0 python3.9[140853]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
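[note] The command above lists /run/libvirt with SELinux labels and greps for container_*_t types; with pipefail set, the task fails if nothing matches. An equivalent sketch in Python, reading the security.selinux xattr directly (the startswith check approximates the grep pattern):

    import os

    def selinux_type(path: str) -> str:
        # A label looks like "system_u:object_r:container_file_t:s0";
        # the third field is the type the grep above matches on.
        label = os.getxattr(path, "security.selinux").decode().rstrip("\x00")
        return label.split(":")[2]

    for root, dirs, files in os.walk("/run/libvirt"):
        for name in dirs + files:
            path = os.path.join(root, name)
            t = selinux_type(path)
            if t.startswith("container_"):
                print(path, t)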
Oct  9 09:47:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v406: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:47:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:10.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:11 compute-0 python3.9[141009]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct  9 09:47:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:11.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:11 compute-0 podman[141010]: 2025-10-09 09:47:11.630752992 +0000 UTC m=+0.067841335 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:47:11 compute-0 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  9 09:47:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:47:12] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct  9 09:47:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:47:12] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct  9 09:47:12 compute-0 python3.9[141183]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:12 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v407: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:47:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:12.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:12 compute-0 python3.9[141336]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:13.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:13 compute-0 python3.9[141488]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:13 compute-0 python3.9[141640]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:14 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:13 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:14 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:14 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:14 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:14 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:14 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:14 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:14 compute-0 python3.9[141792]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:14 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v408: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:47:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:14.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:14 compute-0 python3.9[141945]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:15 compute-0 python3.9[142098]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:15.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:15 compute-0 python3.9[142250]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:16 compute-0 python3.9[142402]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:16 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v409: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:47:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:16.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:16 compute-0 python3.9[142556]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:16.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:17.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:17.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:17.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:17.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:17 compute-0 python3.9[142708]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:47:17 compute-0 systemd[1]: Reloading.
Oct  9 09:47:17 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:17 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:17 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Oct  9 09:47:17 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Oct  9 09:47:17 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Oct  9 09:47:17 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct  9 09:47:17 compute-0 systemd[1]: Starting libvirt logging daemon...
Oct  9 09:47:17 compute-0 systemd[1]: Started libvirt logging daemon.
Oct  9 09:47:18 compute-0 python3.9[142903]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:47:18 compute-0 systemd[1]: Reloading.
Oct  9 09:47:18 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v410: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:47:18 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:18 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:18.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:18 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Oct  9 09:47:18 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct  9 09:47:18 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct  9 09:47:18 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct  9 09:47:18 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct  9 09:47:18 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct  9 09:47:18 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct  9 09:47:18 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct  9 09:47:19 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:18 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:19 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:18 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:19 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:18 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:19 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:19 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct  9 09:47:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:19.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:19 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct  9 09:47:19 compute-0 python3.9[143119]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:47:19 compute-0 systemd[1]: Reloading.
Oct  9 09:47:19 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:19 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:19 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct  9 09:47:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:47:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:47:19 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct  9 09:47:19 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct  9 09:47:19 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct  9 09:47:19 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct  9 09:47:19 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct  9 09:47:19 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct  9 09:47:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:47:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:47:19 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct  9 09:47:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:47:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:47:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:47:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:47:20 compute-0 python3.9[143338]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:47:20 compute-0 systemd[1]: Reloading.
Oct  9 09:47:20 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:20 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:20 compute-0 setroubleshoot[143120]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 4388d5a8-40c8-4c94-9402-0691f97e33c2
Oct  9 09:47:20 compute-0 setroubleshoot[143120]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

*****  Plugin dac_override (91.4 confidence) suggests   **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

*****  Plugin catchall (9.59 confidence) suggests   **************************

If you believe that virtlogd should have the dac_read_search capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -X 300 -i my-virtlogd.pp
Oct  9 09:47:20 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Oct  9 09:47:20 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Oct  9 09:47:20 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct  9 09:47:20 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct  9 09:47:20 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct  9 09:47:20 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct  9 09:47:20 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct  9 09:47:20 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct  9 09:47:20 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct  9 09:47:20 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v411: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:47:20 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct  9 09:47:20 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct  9 09:47:20 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct  9 09:47:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:20.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:21 compute-0 python3.9[143553]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:47:21 compute-0 systemd[1]: Reloading.
Oct  9 09:47:21 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:21 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:21 compute-0 podman[143555]: 2025-10-09 09:47:21.151750034 +0000 UTC m=+0.086093154 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:47:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:21.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:21 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Oct  9 09:47:21 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Oct  9 09:47:21 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Oct  9 09:47:21 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct  9 09:47:21 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct  9 09:47:21 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct  9 09:47:21 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct  9 09:47:21 compute-0 systemd[1]: Started libvirt secret daemon.
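The socket/daemon start-up sequence above is the visible effect of the ansible.builtin.systemd call logged at 09:47:21 (daemon_reload=True, state=restarted, name=virtsecretd.service): systemd reloads its unit files, socket activation brings up virtsecretd.socket, virtsecretd-admin.socket and virtsecretd-ro.socket, and then the daemon itself starts. Done by hand, the task reduces to roughly:

    systemctl daemon-reload
    systemctl restart virtsecretd.service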
Oct  9 09:47:21 compute-0 python3.9[143786]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:47:22] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Oct  9 09:47:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:47:22] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Oct  9 09:47:22 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v412: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:47:22 compute-0 python3.9[143939]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  9 09:47:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:22.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:22 compute-0 python3.9[144092]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
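syslog renders embedded newlines as #012, so the _raw_params above read more easily decoded. The task echoes the cluster name and then extracts the fsid value from the generated ceph.conf; xargs with no arguments just trims the surrounding whitespace:

    set -o pipefail;
    echo ceph
    awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs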
Oct  9 09:47:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:23.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:23 compute-0 python3.9[144311]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  9 09:47:23 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct  9 09:47:23 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:47:23 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct  9 09:47:23 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:47:23 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct  9 09:47:23 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 09:47:23 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:47:23 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:47:23 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:47:23 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:47:23 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:47:23 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:47:23 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:47:23 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:47:23 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:47:23 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:47:23 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:47:23 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:47:23 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:47:23 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
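The mon_command payloads dispatched by mgr.compute-0.lwqgfy above correspond to ordinary ceph CLI calls; issued by hand they would look roughly like this (osd/host:compute-N is a ceph config location mask, not a daemon name):

    ceph config rm osd/host:compute-0 osd_memory_target
    ceph config rm osd/host:compute-1 osd_memory_target
    ceph config rm osd/host:compute-2 osd_memory_target
    ceph config generate-minimal-conf
    ceph auth get client.admin
    ceph auth get client.bootstrap-osd
    ceph osd tree destroyed --format json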
Oct  9 09:47:24 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:23 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:24 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:23 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:24 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:24 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:24 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:24 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
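The ganesha.nfsd lines show the NFS grace period starting (90 s) and the cluster-wide grace database being consulted. On a cephadm deployment that database is a RADOS object which can be inspected with the ganesha-rados-grace tool; the pool and namespace below follow the usual cephadm convention for an NFS service named cephfs and are assumptions, not values taken from this log:

    # dump the grace epoch and per-node reclaim/enforcing flags
    ganesha-rados-grace --pool .nfs --ns cephfs dump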
Oct  9 09:47:24 compute-0 podman[144486]: 2025-10-09 09:47:24.049746033 +0000 UTC m=+0.029174968 container create 806cf9484aa0e5851b6975de5e67b02ea9cd4fea0088c7ac94747e63be47bb2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_liskov, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:47:24 compute-0 systemd[1]: Started libpod-conmon-806cf9484aa0e5851b6975de5e67b02ea9cd4fea0088c7ac94747e63be47bb2a.scope.
Oct  9 09:47:24 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:47:24 compute-0 podman[144486]: 2025-10-09 09:47:24.104467822 +0000 UTC m=+0.083896776 container init 806cf9484aa0e5851b6975de5e67b02ea9cd4fea0088c7ac94747e63be47bb2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_liskov, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct  9 09:47:24 compute-0 podman[144486]: 2025-10-09 09:47:24.109121601 +0000 UTC m=+0.088550535 container start 806cf9484aa0e5851b6975de5e67b02ea9cd4fea0088c7ac94747e63be47bb2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_liskov, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  9 09:47:24 compute-0 podman[144486]: 2025-10-09 09:47:24.110217942 +0000 UTC m=+0.089646886 container attach 806cf9484aa0e5851b6975de5e67b02ea9cd4fea0088c7ac94747e63be47bb2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_liskov, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  9 09:47:24 compute-0 musing_liskov[144522]: 167 167
Oct  9 09:47:24 compute-0 systemd[1]: libpod-806cf9484aa0e5851b6975de5e67b02ea9cd4fea0088c7ac94747e63be47bb2a.scope: Deactivated successfully.
Oct  9 09:47:24 compute-0 podman[144486]: 2025-10-09 09:47:24.112803666 +0000 UTC m=+0.092232600 container died 806cf9484aa0e5851b6975de5e67b02ea9cd4fea0088c7ac94747e63be47bb2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  9 09:47:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d9e1075c4ce965297ff3f813403d22c7f95ac018b178212e931434582aa828f-merged.mount: Deactivated successfully.
Oct  9 09:47:24 compute-0 podman[144486]: 2025-10-09 09:47:24.131361549 +0000 UTC m=+0.110790483 container remove 806cf9484aa0e5851b6975de5e67b02ea9cd4fea0088c7ac94747e63be47bb2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_liskov, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:47:24 compute-0 podman[144486]: 2025-10-09 09:47:24.038958107 +0000 UTC m=+0.018387061 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:47:24 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:47:24 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:47:24 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 09:47:24 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:47:24 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:47:24 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:47:24 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:47:24 compute-0 systemd[1]: libpod-conmon-806cf9484aa0e5851b6975de5e67b02ea9cd4fea0088c7ac94747e63be47bb2a.scope: Deactivated successfully.
Oct  9 09:47:24 compute-0 podman[144594]: 2025-10-09 09:47:24.255039752 +0000 UTC m=+0.029115665 container create aeaebb7930c6574457fe680c92c1829718a0f2b507542f974cb55dff6ff78799 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_jones, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:47:24 compute-0 systemd[1]: Started libpod-conmon-aeaebb7930c6574457fe680c92c1829718a0f2b507542f974cb55dff6ff78799.scope.
Oct  9 09:47:24 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6681cbff004b876eb4bb84da42edc6d767d15d252adb3268577a8c1c355185d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6681cbff004b876eb4bb84da42edc6d767d15d252adb3268577a8c1c355185d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6681cbff004b876eb4bb84da42edc6d767d15d252adb3268577a8c1c355185d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6681cbff004b876eb4bb84da42edc6d767d15d252adb3268577a8c1c355185d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6681cbff004b876eb4bb84da42edc6d767d15d252adb3268577a8c1c355185d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:47:24 compute-0 podman[144594]: 2025-10-09 09:47:24.326843664 +0000 UTC m=+0.100919578 container init aeaebb7930c6574457fe680c92c1829718a0f2b507542f974cb55dff6ff78799 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_jones, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:47:24 compute-0 podman[144594]: 2025-10-09 09:47:24.33235763 +0000 UTC m=+0.106433543 container start aeaebb7930c6574457fe680c92c1829718a0f2b507542f974cb55dff6ff78799 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:47:24 compute-0 python3.9[144588]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:24 compute-0 podman[144594]: 2025-10-09 09:47:24.340205552 +0000 UTC m=+0.114281475 container attach aeaebb7930c6574457fe680c92c1829718a0f2b507542f974cb55dff6ff78799 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_jones, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:47:24 compute-0 podman[144594]: 2025-10-09 09:47:24.243432187 +0000 UTC m=+0.017508119 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:47:24 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v413: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:47:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:24.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:24 compute-0 cranky_jones[144607]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:47:24 compute-0 cranky_jones[144607]: --> All data devices are unavailable
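The two cranky_jones lines read like ceph-volume's batch-mode device filtering: one LVM data device was offered and rejected as unavailable, which is expected when the LV already carries an OSD (the lvm list output further down confirms it does). A sketch of the kind of invocation that produces this report, assuming the drive group passes the LV directly:

    ceph-volume lvm batch --report /dev/ceph_vg0/ceph_lv0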
Oct  9 09:47:24 compute-0 systemd[1]: libpod-aeaebb7930c6574457fe680c92c1829718a0f2b507542f974cb55dff6ff78799.scope: Deactivated successfully.
Oct  9 09:47:24 compute-0 conmon[144607]: conmon aeaebb7930c6574457fe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aeaebb7930c6574457fe680c92c1829718a0f2b507542f974cb55dff6ff78799.scope/container/memory.events
Oct  9 09:47:24 compute-0 podman[144594]: 2025-10-09 09:47:24.59302617 +0000 UTC m=+0.367102083 container died aeaebb7930c6574457fe680c92c1829718a0f2b507542f974cb55dff6ff78799 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_jones, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  9 09:47:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6681cbff004b876eb4bb84da42edc6d767d15d252adb3268577a8c1c355185d-merged.mount: Deactivated successfully.
Oct  9 09:47:24 compute-0 podman[144594]: 2025-10-09 09:47:24.618771136 +0000 UTC m=+0.392847050 container remove aeaebb7930c6574457fe680c92c1829718a0f2b507542f974cb55dff6ff78799 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_jones, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  9 09:47:24 compute-0 systemd[1]: libpod-conmon-aeaebb7930c6574457fe680c92c1829718a0f2b507542f974cb55dff6ff78799.scope: Deactivated successfully.
Oct  9 09:47:24 compute-0 python3.9[144743]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003243.9540887-3359-60929861072203/.source.xml follow=False _original_basename=secret.xml.j2 checksum=c150843fcb80d0d0a9968a12abeb036b918e43ed backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
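The file staged here is a libvirt secret definition for the Ceph client key; the log records its checksum but not its content. A minimal sketch of what such a secret.xml typically contains, using the cluster fsid seen elsewhere in this log as the secret UUID (the usage name is an assumption, not taken from the log):

    # hypothetical reconstruction of the rendered secret.xml.j2
    cat > /tmp/secret.xml <<'EOF'
    <secret ephemeral='no' private='no'>
      <uuid>286f8bf0-da72-5823-9a4e-ac4457d9e609</uuid>
      <usage type='ceph'>
        <name>client.openstack secret</name>
      </usage>
    </secret>
    EOF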
Oct  9 09:47:25 compute-0 podman[144916]: 2025-10-09 09:47:25.042888749 +0000 UTC m=+0.031572985 container create 0b6c9f4372ee65cc9c392eb9765bda8be199b762e1ac2052882506d537385419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_ritchie, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 09:47:25 compute-0 systemd[1]: Started libpod-conmon-0b6c9f4372ee65cc9c392eb9765bda8be199b762e1ac2052882506d537385419.scope.
Oct  9 09:47:25 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:47:25 compute-0 podman[144916]: 2025-10-09 09:47:25.099947882 +0000 UTC m=+0.088632129 container init 0b6c9f4372ee65cc9c392eb9765bda8be199b762e1ac2052882506d537385419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct  9 09:47:25 compute-0 podman[144916]: 2025-10-09 09:47:25.105250808 +0000 UTC m=+0.093935045 container start 0b6c9f4372ee65cc9c392eb9765bda8be199b762e1ac2052882506d537385419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_ritchie, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  9 09:47:25 compute-0 podman[144916]: 2025-10-09 09:47:25.106438721 +0000 UTC m=+0.095122978 container attach 0b6c9f4372ee65cc9c392eb9765bda8be199b762e1ac2052882506d537385419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_ritchie, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:47:25 compute-0 exciting_ritchie[144968]: 167 167
Oct  9 09:47:25 compute-0 systemd[1]: libpod-0b6c9f4372ee65cc9c392eb9765bda8be199b762e1ac2052882506d537385419.scope: Deactivated successfully.
Oct  9 09:47:25 compute-0 podman[144916]: 2025-10-09 09:47:25.108846009 +0000 UTC m=+0.097530245 container died 0b6c9f4372ee65cc9c392eb9765bda8be199b762e1ac2052882506d537385419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_ritchie, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 09:47:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f826625885224d82bf7d3cf03b581ccd476dba32c5bb95a8a2bb1428368893f1-merged.mount: Deactivated successfully.
Oct  9 09:47:25 compute-0 podman[144916]: 2025-10-09 09:47:25.030990255 +0000 UTC m=+0.019674492 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:47:25 compute-0 podman[144916]: 2025-10-09 09:47:25.130398046 +0000 UTC m=+0.119082283 container remove 0b6c9f4372ee65cc9c392eb9765bda8be199b762e1ac2052882506d537385419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True)
Oct  9 09:47:25 compute-0 systemd[1]: libpod-conmon-0b6c9f4372ee65cc9c392eb9765bda8be199b762e1ac2052882506d537385419.scope: Deactivated successfully.
Oct  9 09:47:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:25.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:25 compute-0 podman[145019]: 2025-10-09 09:47:25.258709952 +0000 UTC m=+0.031985986 container create 8aff0e49b22106026565f422b0a576895871612c4739fe3077e1cb493ae965cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_liskov, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:47:25 compute-0 systemd[1]: Started libpod-conmon-8aff0e49b22106026565f422b0a576895871612c4739fe3077e1cb493ae965cb.scope.
Oct  9 09:47:25 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/245c3e5135775501070fda71689f79f7d88421758a3c7b20d0ccc2b702177a52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/245c3e5135775501070fda71689f79f7d88421758a3c7b20d0ccc2b702177a52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/245c3e5135775501070fda71689f79f7d88421758a3c7b20d0ccc2b702177a52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/245c3e5135775501070fda71689f79f7d88421758a3c7b20d0ccc2b702177a52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:47:25 compute-0 python3.9[145013]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 286f8bf0-da72-5823-9a4e-ac4457d9e609#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
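Decoding the #012 newline escapes, the task at 09:47:25 replaces the libvirt secret in place:

    virsh secret-undefine 286f8bf0-da72-5823-9a4e-ac4457d9e609
    virsh secret-define --file /tmp/secret.xml

The key material itself is normally loaded in a separate virsh secret-set-value step; none appears in this excerpt, and the temporary file is removed by the ansible.builtin.file task a moment later.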
Oct  9 09:47:25 compute-0 podman[145019]: 2025-10-09 09:47:25.318558674 +0000 UTC m=+0.091834719 container init 8aff0e49b22106026565f422b0a576895871612c4739fe3077e1cb493ae965cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:47:25 compute-0 podman[145019]: 2025-10-09 09:47:25.324377554 +0000 UTC m=+0.097653590 container start 8aff0e49b22106026565f422b0a576895871612c4739fe3077e1cb493ae965cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:47:25 compute-0 podman[145019]: 2025-10-09 09:47:25.329225971 +0000 UTC m=+0.102502027 container attach 8aff0e49b22106026565f422b0a576895871612c4739fe3077e1cb493ae965cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_liskov, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:47:25 compute-0 podman[145019]: 2025-10-09 09:47:25.246486384 +0000 UTC m=+0.019762418 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:47:25 compute-0 cool_liskov[145032]: {
Oct  9 09:47:25 compute-0 cool_liskov[145032]:    "1": [
Oct  9 09:47:25 compute-0 cool_liskov[145032]:        {
Oct  9 09:47:25 compute-0 cool_liskov[145032]:            "devices": [
Oct  9 09:47:25 compute-0 cool_liskov[145032]:                "/dev/loop3"
Oct  9 09:47:25 compute-0 cool_liskov[145032]:            ],
Oct  9 09:47:25 compute-0 cool_liskov[145032]:            "lv_name": "ceph_lv0",
Oct  9 09:47:25 compute-0 cool_liskov[145032]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:47:25 compute-0 cool_liskov[145032]:            "lv_size": "21470642176",
Oct  9 09:47:25 compute-0 cool_liskov[145032]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:47:25 compute-0 cool_liskov[145032]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:47:25 compute-0 cool_liskov[145032]:            "name": "ceph_lv0",
Oct  9 09:47:25 compute-0 cool_liskov[145032]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:47:25 compute-0 cool_liskov[145032]:            "tags": {
Oct  9 09:47:25 compute-0 cool_liskov[145032]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:47:25 compute-0 cool_liskov[145032]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:47:25 compute-0 cool_liskov[145032]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:47:25 compute-0 cool_liskov[145032]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:47:25 compute-0 cool_liskov[145032]:                "ceph.cluster_name": "ceph",
Oct  9 09:47:25 compute-0 cool_liskov[145032]:                "ceph.crush_device_class": "",
Oct  9 09:47:25 compute-0 cool_liskov[145032]:                "ceph.encrypted": "0",
Oct  9 09:47:25 compute-0 cool_liskov[145032]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:47:25 compute-0 cool_liskov[145032]:                "ceph.osd_id": "1",
Oct  9 09:47:25 compute-0 cool_liskov[145032]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:47:25 compute-0 cool_liskov[145032]:                "ceph.type": "block",
Oct  9 09:47:25 compute-0 cool_liskov[145032]:                "ceph.vdo": "0",
Oct  9 09:47:25 compute-0 cool_liskov[145032]:                "ceph.with_tpm": "0"
Oct  9 09:47:25 compute-0 cool_liskov[145032]:            },
Oct  9 09:47:25 compute-0 cool_liskov[145032]:            "type": "block",
Oct  9 09:47:25 compute-0 cool_liskov[145032]:            "vg_name": "ceph_vg0"
Oct  9 09:47:25 compute-0 cool_liskov[145032]:        }
Oct  9 09:47:25 compute-0 cool_liskov[145032]:    ]
Oct  9 09:47:25 compute-0 cool_liskov[145032]: }
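The JSON emitted by cool_liskov matches the shape of ceph-volume lvm list --format json output: one top-level key per OSD id, each mapping to the logical volumes backing it. A small illustrative sketch (the jq pipeline is not from the log) for pulling out the interesting fields:

    # OSD id, backing device, and OSD fsid, one line per OSD
    ceph-volume lvm list --format json |
      jq -r 'to_entries[] | "\(.key) \(.value[0].devices[0]) \(.value[0].tags["ceph.osd_fsid"])"'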
Oct  9 09:47:25 compute-0 systemd[1]: libpod-8aff0e49b22106026565f422b0a576895871612c4739fe3077e1cb493ae965cb.scope: Deactivated successfully.
Oct  9 09:47:25 compute-0 podman[145019]: 2025-10-09 09:47:25.568013503 +0000 UTC m=+0.341289538 container died 8aff0e49b22106026565f422b0a576895871612c4739fe3077e1cb493ae965cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_liskov, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  9 09:47:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-245c3e5135775501070fda71689f79f7d88421758a3c7b20d0ccc2b702177a52-merged.mount: Deactivated successfully.
Oct  9 09:47:25 compute-0 podman[145019]: 2025-10-09 09:47:25.589616888 +0000 UTC m=+0.362892922 container remove 8aff0e49b22106026565f422b0a576895871612c4739fe3077e1cb493ae965cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:47:25 compute-0 systemd[1]: libpod-conmon-8aff0e49b22106026565f422b0a576895871612c4739fe3077e1cb493ae965cb.scope: Deactivated successfully.
Oct  9 09:47:25 compute-0 python3.9[145262]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:26 compute-0 podman[145318]: 2025-10-09 09:47:26.014532067 +0000 UTC m=+0.028257313 container create 126731d06c0a87c247e33ad19c471f607bffaef74e66a60bac1e960a62c987c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_darwin, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:47:26 compute-0 systemd[1]: Started libpod-conmon-126731d06c0a87c247e33ad19c471f607bffaef74e66a60bac1e960a62c987c4.scope.
Oct  9 09:47:26 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:47:26 compute-0 podman[145318]: 2025-10-09 09:47:26.059624177 +0000 UTC m=+0.073349443 container init 126731d06c0a87c247e33ad19c471f607bffaef74e66a60bac1e960a62c987c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:47:26 compute-0 podman[145318]: 2025-10-09 09:47:26.06522179 +0000 UTC m=+0.078947037 container start 126731d06c0a87c247e33ad19c471f607bffaef74e66a60bac1e960a62c987c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_darwin, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:47:26 compute-0 podman[145318]: 2025-10-09 09:47:26.066456041 +0000 UTC m=+0.080181288 container attach 126731d06c0a87c247e33ad19c471f607bffaef74e66a60bac1e960a62c987c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_darwin, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:47:26 compute-0 stupefied_darwin[145332]: 167 167
Oct  9 09:47:26 compute-0 systemd[1]: libpod-126731d06c0a87c247e33ad19c471f607bffaef74e66a60bac1e960a62c987c4.scope: Deactivated successfully.
Oct  9 09:47:26 compute-0 podman[145318]: 2025-10-09 09:47:26.069197439 +0000 UTC m=+0.082922675 container died 126731d06c0a87c247e33ad19c471f607bffaef74e66a60bac1e960a62c987c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  9 09:47:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-4702e8af1f066c46075196b6dc163c47fbae3c23b932addbe32129fcec08939a-merged.mount: Deactivated successfully.
Oct  9 09:47:26 compute-0 podman[145318]: 2025-10-09 09:47:26.089660509 +0000 UTC m=+0.103385756 container remove 126731d06c0a87c247e33ad19c471f607bffaef74e66a60bac1e960a62c987c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 09:47:26 compute-0 podman[145318]: 2025-10-09 09:47:26.003430709 +0000 UTC m=+0.017155966 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:47:26 compute-0 systemd[1]: libpod-conmon-126731d06c0a87c247e33ad19c471f607bffaef74e66a60bac1e960a62c987c4.scope: Deactivated successfully.
Oct  9 09:47:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:26 compute-0 podman[145418]: 2025-10-09 09:47:26.22010935 +0000 UTC m=+0.030168584 container create 8733c274af19fdd8e55863fda0502533ea92c35a1feaa6d6e83ae7ce5fb3dec5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:47:26 compute-0 systemd[1]: Started libpod-conmon-8733c274af19fdd8e55863fda0502533ea92c35a1feaa6d6e83ae7ce5fb3dec5.scope.
Oct  9 09:47:26 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea8933c450f69038e55f1a6694f6a6792e1eaa193e6509a95bef5cb5f79bd690/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea8933c450f69038e55f1a6694f6a6792e1eaa193e6509a95bef5cb5f79bd690/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea8933c450f69038e55f1a6694f6a6792e1eaa193e6509a95bef5cb5f79bd690/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea8933c450f69038e55f1a6694f6a6792e1eaa193e6509a95bef5cb5f79bd690/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:47:26 compute-0 podman[145418]: 2025-10-09 09:47:26.274564886 +0000 UTC m=+0.084624139 container init 8733c274af19fdd8e55863fda0502533ea92c35a1feaa6d6e83ae7ce5fb3dec5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  9 09:47:26 compute-0 podman[145418]: 2025-10-09 09:47:26.27998807 +0000 UTC m=+0.090047303 container start 8733c274af19fdd8e55863fda0502533ea92c35a1feaa6d6e83ae7ce5fb3dec5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_lovelace, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:47:26 compute-0 podman[145418]: 2025-10-09 09:47:26.281391751 +0000 UTC m=+0.091450984 container attach 8733c274af19fdd8e55863fda0502533ea92c35a1feaa6d6e83ae7ce5fb3dec5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_lovelace, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 09:47:26 compute-0 podman[145418]: 2025-10-09 09:47:26.20864119 +0000 UTC m=+0.018700443 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:47:26 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v414: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:47:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:26.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:26 compute-0 lvm[145673]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:47:26 compute-0 gifted_lovelace[145457]: {}
Oct  9 09:47:26 compute-0 lvm[145673]: VG ceph_vg0 finished
Oct  9 09:47:26 compute-0 systemd[1]: libpod-8733c274af19fdd8e55863fda0502533ea92c35a1feaa6d6e83ae7ce5fb3dec5.scope: Deactivated successfully.
Oct  9 09:47:26 compute-0 podman[145418]: 2025-10-09 09:47:26.788943763 +0000 UTC m=+0.599003006 container died 8733c274af19fdd8e55863fda0502533ea92c35a1feaa6d6e83ae7ce5fb3dec5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  9 09:47:26 compute-0 lvm[145692]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:47:26 compute-0 lvm[145692]: VG ceph_vg0 finished
Oct  9 09:47:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea8933c450f69038e55f1a6694f6a6792e1eaa193e6509a95bef5cb5f79bd690-merged.mount: Deactivated successfully.
Oct  9 09:47:26 compute-0 podman[145418]: 2025-10-09 09:47:26.812020902 +0000 UTC m=+0.622080135 container remove 8733c274af19fdd8e55863fda0502533ea92c35a1feaa6d6e83ae7ce5fb3dec5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  9 09:47:26 compute-0 systemd[1]: libpod-conmon-8733c274af19fdd8e55863fda0502533ea92c35a1feaa6d6e83ae7ce5fb3dec5.scope: Deactivated successfully.
Oct  9 09:47:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:47:26 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:47:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:47:26 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:47:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:26.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:27.009Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:27.009Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:27.009Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000014s ======
Oct  9 09:47:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:27.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Oct  9 09:47:27 compute-0 python3.9[145920]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:27 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:47:27 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:47:28 compute-0 python3.9[146072]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:28 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v415: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:47:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:28.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:28 compute-0 python3.9[146197]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003247.7461383-3524-56181947406118/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:29 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:28 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:29 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:28 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:29 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:28 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:29 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:29.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:29 compute-0 python3.9[146349]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:29 compute-0 python3.9[146501]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:30 compute-0 python3.9[146605]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:30 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct  9 09:47:30 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct  9 09:47:30 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v416: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:47:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:30.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:30 compute-0 python3.9[146758]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:31.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:31 compute-0 python3.9[146836]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.pde7s1rm recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:31 compute-0 python3.9[146988]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:32 compute-0 python3.9[147066]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:47:32] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Oct  9 09:47:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:47:32] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Oct  9 09:47:32 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v417: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:47:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:47:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:32.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:47:32 compute-0 python3.9[147220]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:47:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:33.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:33 compute-0 python3[147373]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  9 09:47:33 compute-0 python3.9[147525]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:34 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:34 compute-0 python3.9[147604]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:34 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v418: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:47:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:34.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:47:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:47:34 compute-0 python3.9[147757]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:35 compute-0 python3.9[147835]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:35.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:35 compute-0 python3.9[147987]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:35 compute-0 python3.9[148065]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=cleanup t=2025-10-09T09:47:36.391785442Z level=info msg="Completed cleanup jobs" duration=4.496123ms
Oct  9 09:47:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=sqlstore.transactions t=2025-10-09T09:47:36.399373634Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Oct  9 09:47:36 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v419: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:47:36 compute-0 python3.9[148218]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=grafana.update.checker t=2025-10-09T09:47:36.488749226Z level=info msg="Update check succeeded" duration=44.383211ms
Oct  9 09:47:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=plugins.update.checker t=2025-10-09T09:47:36.497505715Z level=info msg="Update check succeeded" duration=48.546715ms
Oct  9 09:47:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:36.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:36 compute-0 python3.9[148297]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:37.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:37.013Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:37.014Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:37.014Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:37.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:37 compute-0 python3.9[148449]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:37 compute-0 python3.9[148574]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003256.9820318-3899-144980638650620/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:38 compute-0 python3.9[148727]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:38 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v420: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:47:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:38.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:38 compute-0 python3.9[148880]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:47:39 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:38 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:39 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:38 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:39 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:38 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:39 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:39.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:39 compute-0 python3.9[149035]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:40 compute-0 python3.9[149187]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:47:40 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v421: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:47:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:40.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:40 compute-0 python3.9[149341]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:47:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:41.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:41 compute-0 python3.9[149496]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:47:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:41 compute-0 python3.9[149651]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:42 compute-0 podman[149776]: 2025-10-09 09:47:42.118759277 +0000 UTC m=+0.055769689 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:47:42 compute-0 python3.9[149814]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:47:42] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  9 09:47:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:47:42] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Oct  9 09:47:42 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v422: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 09:47:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:42.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:42 compute-0 python3.9[149944]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003261.871007-4115-106203085367705/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:43.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:43 compute-0 python3.9[150096]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:43 compute-0 python3.9[150219]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003262.861519-4160-97510314411257/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:44 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:44 compute-0 python3.9[150372]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:44 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v423: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:47:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:44.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:44 compute-0 python3.9[150496]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003263.8402689-4205-95932711662195/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:47:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:45.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:47:45 compute-0 python3.9[150648]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:47:45 compute-0 systemd[1]: Reloading.
Oct  9 09:47:45 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:45 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:45 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Oct  9 09:47:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:46 compute-0 python3.9[150840]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  9 09:47:46 compute-0 systemd[1]: Reloading.
Oct  9 09:47:46 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:46 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:46 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v424: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 09:47:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:46.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:46 compute-0 systemd[1]: Reloading.
Oct  9 09:47:46 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:46 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:47.001Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:47.012Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:47.012Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:47.012Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:47.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:47 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Oct  9 09:47:47 compute-0 systemd[1]: session-37.scope: Consumed 2min 28.510s CPU time.
Oct  9 09:47:47 compute-0 systemd-logind[798]: Session 37 logged out. Waiting for processes to exit.
Oct  9 09:47:47 compute-0 systemd-logind[798]: Removed session 37.
Oct  9 09:47:48 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v425: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:47:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:48.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:49.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:47:49
Oct  9 09:47:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:47:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 09:47:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'backups', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', '.nfs', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'volumes']
Oct  9 09:47:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 09:47:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:47:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:47:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:47:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:47:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:47:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:47:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:47:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:47:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:47:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:47:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:47:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:47:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:47:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:47:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:47:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:47:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:47:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:47:50 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v426: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:47:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:50.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:51.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:51 compute-0 podman[150967]: 2025-10-09 09:47:51.631896421 +0000 UTC m=+0.072136472 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  9 09:47:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:47:52] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct  9 09:47:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:47:52] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct  9 09:47:52 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v427: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 09:47:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:52.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:52 compute-0 systemd-logind[798]: New session 38 of user zuul.
Oct  9 09:47:52 compute-0 systemd[1]: Started Session 38 of User zuul.
Oct  9 09:47:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:53.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:53 compute-0 python3.9[151145]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:47:54 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v428: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:47:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:54.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:54 compute-0 python3.9[151302]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:47:55 compute-0 python3.9[151455]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:47:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:55.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:55 compute-0 python3.9[151607]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:47:56 compute-0 python3.9[151759]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  9 09:47:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:56 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v429: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:47:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:56.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:56 compute-0 python3.9[151913]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:47:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:57.001Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:57.014Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:57.015Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:47:57.015Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:47:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:57.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:57 compute-0 python3.9[152065]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:47:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:47:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:58 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v430: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:47:58 compute-0 python3.9[152220]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:47:58 compute-0 systemd[1]: Reloading.
Oct  9 09:47:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:58.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:58 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:58 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:47:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  9 09:47:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:47:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:47:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:59.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:47:59 compute-0 python3.9[152410]: ansible-ansible.builtin.service_facts Invoked
Oct  9 09:47:59 compute-0 network[152427]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 09:47:59 compute-0 network[152428]: 'network-scripts' will be removed from distribution in near future.
Oct  9 09:47:59 compute-0 network[152429]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 09:48:00 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v431: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:48:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:00.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:01.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:02 compute-0 python3.9[152705]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:48:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:48:02] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct  9 09:48:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:48:02] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct  9 09:48:02 compute-0 systemd[1]: Reloading.
Oct  9 09:48:02 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:48:02 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:48:02 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v432: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:48:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:02.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:03 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:03 compute-0 python3.9[152894]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:03.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:03 compute-0 python3.9[153046]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  9 09:48:04 compute-0 rsyslogd[1243]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:48:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v433: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:48:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:48:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:48:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:04.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:05.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v434: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:48:06 compute-0 podman[153056]: 2025-10-09 09:48:06.494866377 +0000 UTC m=+2.534533332 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct  9 09:48:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:06.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:06 compute-0 podman[153107]: 2025-10-09 09:48:06.586430272 +0000 UTC m=+0.027933742 container create 801734e5791ddc2eaf3a6f135872f2dafe923ff0a9e0b9271de1e74333dbe335 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:48:06 compute-0 NetworkManager[982]: <info>  [1760003286.6115] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/23)
Oct  9 09:48:06 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct  9 09:48:06 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct  9 09:48:06 compute-0 kernel: veth0: entered allmulticast mode
Oct  9 09:48:06 compute-0 kernel: veth0: entered promiscuous mode
Oct  9 09:48:06 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct  9 09:48:06 compute-0 kernel: podman0: port 1(veth0) entered forwarding state
Oct  9 09:48:06 compute-0 NetworkManager[982]: <info>  [1760003286.6256] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/24)
Oct  9 09:48:06 compute-0 NetworkManager[982]: <info>  [1760003286.6269] device (veth0): carrier: link connected
Oct  9 09:48:06 compute-0 NetworkManager[982]: <info>  [1760003286.6271] device (podman0): carrier: link connected
Oct  9 09:48:06 compute-0 systemd-udevd[153127]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:48:06 compute-0 systemd-udevd[153130]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:48:06 compute-0 NetworkManager[982]: <info>  [1760003286.6462] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 09:48:06 compute-0 NetworkManager[982]: <info>  [1760003286.6468] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  9 09:48:06 compute-0 NetworkManager[982]: <info>  [1760003286.6473] device (podman0): Activation: starting connection 'podman0' (7b36b55a-2625-4f93-bae7-e632137d002d)
Oct  9 09:48:06 compute-0 NetworkManager[982]: <info>  [1760003286.6475] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  9 09:48:06 compute-0 NetworkManager[982]: <info>  [1760003286.6477] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  9 09:48:06 compute-0 NetworkManager[982]: <info>  [1760003286.6478] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  9 09:48:06 compute-0 NetworkManager[982]: <info>  [1760003286.6480] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  9 09:48:06 compute-0 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct  9 09:48:06 compute-0 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct  9 09:48:06 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 09:48:06 compute-0 podman[153107]: 2025-10-09 09:48:06.574808752 +0000 UTC m=+0.016312232 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct  9 09:48:06 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 09:48:06 compute-0 NetworkManager[982]: <info>  [1760003286.6733] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  9 09:48:06 compute-0 NetworkManager[982]: <info>  [1760003286.6735] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  9 09:48:06 compute-0 NetworkManager[982]: <info>  [1760003286.6740] device (podman0): Activation: successful, device activated.
Oct  9 09:48:06 compute-0 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct  9 09:48:06 compute-0 systemd[1]: Started libpod-conmon-801734e5791ddc2eaf3a6f135872f2dafe923ff0a9e0b9271de1e74333dbe335.scope.
Oct  9 09:48:06 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:48:06 compute-0 podman[153107]: 2025-10-09 09:48:06.86223417 +0000 UTC m=+0.303737651 container init 801734e5791ddc2eaf3a6f135872f2dafe923ff0a9e0b9271de1e74333dbe335 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:48:06 compute-0 podman[153107]: 2025-10-09 09:48:06.867205069 +0000 UTC m=+0.308708539 container start 801734e5791ddc2eaf3a6f135872f2dafe923ff0a9e0b9271de1e74333dbe335 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:48:06 compute-0 iscsid_config[153259]: iqn.1994-05.com.redhat:f6474cfb82aa#015
Oct  9 09:48:06 compute-0 podman[153107]: 2025-10-09 09:48:06.869275971 +0000 UTC m=+0.310779462 container attach 801734e5791ddc2eaf3a6f135872f2dafe923ff0a9e0b9271de1e74333dbe335 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:48:06 compute-0 systemd[1]: libpod-801734e5791ddc2eaf3a6f135872f2dafe923ff0a9e0b9271de1e74333dbe335.scope: Deactivated successfully.
Oct  9 09:48:06 compute-0 conmon[153259]: conmon 801734e5791ddc2eaf3a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-801734e5791ddc2eaf3a6f135872f2dafe923ff0a9e0b9271de1e74333dbe335.scope/container/memory.events
Oct  9 09:48:06 compute-0 podman[153107]: 2025-10-09 09:48:06.870610652 +0000 UTC m=+0.312114122 container died 801734e5791ddc2eaf3a6f135872f2dafe923ff0a9e0b9271de1e74333dbe335 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  9 09:48:06 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct  9 09:48:06 compute-0 kernel: veth0 (unregistering): left allmulticast mode
Oct  9 09:48:06 compute-0 kernel: veth0 (unregistering): left promiscuous mode
Oct  9 09:48:06 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct  9 09:48:06 compute-0 NetworkManager[982]: <info>  [1760003286.9106] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 09:48:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:07.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:48:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:07.012Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:48:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:07.012Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:48:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:07.012Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:48:07 compute-0 systemd[1]: run-netns-netns\x2dab9f5f91\x2d4ca9\x2ddfe5\x2d8c7a\x2da79c0d663ae6.mount: Deactivated successfully.
Oct  9 09:48:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-801734e5791ddc2eaf3a6f135872f2dafe923ff0a9e0b9271de1e74333dbe335-userdata-shm.mount: Deactivated successfully.
Oct  9 09:48:07 compute-0 podman[153107]: 2025-10-09 09:48:07.139562377 +0000 UTC m=+0.581065837 container remove 801734e5791ddc2eaf3a6f135872f2dafe923ff0a9e0b9271de1e74333dbe335 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 09:48:07 compute-0 systemd[1]: libpod-conmon-801734e5791ddc2eaf3a6f135872f2dafe923ff0a9e0b9271de1e74333dbe335.scope: Deactivated successfully.
Oct  9 09:48:07 compute-0 python3.9[153046]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f /usr/sbin/iscsi-iname
Oct  9 09:48:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:07.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:07 compute-0 python3.9[153046]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: #012DEPRECATED command:#012It is recommended to use Quadlets for running containers and pods under systemd.#012#012Please refer to podman-systemd.unit(5) for details.#012Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct  9 09:48:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-039d6bcfeead7e3861b915c2206981009aad1b641172429798a0b3c5a82612f7-merged.mount: Deactivated successfully.
Oct  9 09:48:07 compute-0 python3.9[153494]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:08 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:08 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:08 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:08 compute-0 python3.9[153618]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003287.3563366-317-94175569536769/.source.iscsi _original_basename=.1gyft98v follow=False checksum=ac6b40e130011e0549af5b2326625fad9199195c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v435: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:48:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:08.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:08 compute-0 python3.9[153771]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:09.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:09 compute-0 python3.9[153921]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:48:10.099 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:48:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:48:10.099 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:48:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:48:10.099 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:48:10 compute-0 python3.9[154101]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v436: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:48:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:10.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:10 compute-0 python3.9[154254]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:11.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:11 compute-0 python3.9[154406]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:11 compute-0 python3.9[154484]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:12 compute-0 python3.9[154636]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:48:12] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct  9 09:48:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:48:12] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Oct  9 09:48:12 compute-0 podman[154687]: 2025-10-09 09:48:12.355662467 +0000 UTC m=+0.043562162 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  9 09:48:12 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v437: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:48:12 compute-0 python3.9[154729]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:12.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:13 compute-0 python3.9[154884]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:13.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:13 compute-0 python3.9[155036]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:14 compute-0 python3.9[155114]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:14 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v438: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:48:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:14.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:14 compute-0 python3.9[155268]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:15 compute-0 python3.9[155346]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:15.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:15 compute-0 python3.9[155498]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:48:15 compute-0 systemd[1]: Reloading.
Oct  9 09:48:15 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:48:15 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:48:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:16 compute-0 python3.9[155688]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:16 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v439: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:48:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:16.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:16 compute-0 python3.9[155767]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:16 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 09:48:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:16 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:16 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:16 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:16 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:17.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  9 09:48:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:17.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:48:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:17.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:48:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:17.011Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:48:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:17.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:17 compute-0 python3.9[155919]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:17 compute-0 python3.9[155997]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:18 compute-0 python3.9[156150]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:48:18 compute-0 systemd[1]: Reloading.
Oct  9 09:48:18 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:48:18 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:48:18 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v440: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:48:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:18.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:18 compute-0 systemd[1]: Starting Create netns directory...
Oct  9 09:48:18 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 09:48:18 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 09:48:18 compute-0 systemd[1]: Finished Create netns directory.
Oct  9 09:48:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:19.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:19 compute-0 python3.9[156343]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:48:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:48:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:48:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:48:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:48:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:48:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:48:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:48:19 compute-0 python3.9[156495]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:20 compute-0 python3.9[156619]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003299.5020275-779-55323100294144/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:20 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v441: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:48:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:20.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:21 compute-0 python3.9[156772]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:21.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:21 compute-0 python3.9[156924]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:21 compute-0 podman[157019]: 2025-10-09 09:48:21.82110817 +0000 UTC m=+0.055116825 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  9 09:48:21 compute-0 python3.9[157061]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003301.2159748-854-132631783388434/.source.json _original_basename=.dlhx6qcc follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:21 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:48:22] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  9 09:48:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:48:22] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  9 09:48:22 compute-0 python3.9[157223]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:22 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v442: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:48:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:22.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:23.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:24 compute-0 python3.9[157652]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct  9 09:48:24 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v443: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:48:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:24.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:24 compute-0 python3.9[157805]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  9 09:48:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:25.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:25 compute-0 python3.9[157957]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  9 09:48:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:26 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v444: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:48:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:26.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:26 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:27.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:48:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:27.036Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:48:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:27.036Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:48:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:27.036Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:48:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:27.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:27 compute-0 python3[158180]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  9 09:48:27 compute-0 podman[158237]: 2025-10-09 09:48:27.574225332 +0000 UTC m=+0.028834033 container create 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:48:27 compute-0 podman[158237]: 2025-10-09 09:48:27.561237165 +0000 UTC m=+0.015845876 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct  9 09:48:27 compute-0 python3[158180]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct  9 09:48:28 compute-0 python3.9[158416]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:28 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v445: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:48:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:48:28 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:48:28 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:48:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:28.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:48:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:48:28 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:48:28 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:28 compute-0 python3.9[158572]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:29 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:48:29 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:48:29 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:48:29 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:48:29 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v446: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 09:48:29 compute-0 python3.9[158648]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:29 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:48:29 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:29 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:48:29 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:29 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:48:29 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:48:29 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:48:29 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:48:29 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:48:29 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:48:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:29.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:29 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:29 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:29 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:29 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:29 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:48:29 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:29 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:29 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:48:29 compute-0 podman[158878]: 2025-10-09 09:48:29.590365719 +0000 UTC m=+0.029894672 container create 60e08f19ecce1865320d627622a0d64c05e3ef6089285594eba147192c02d254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_banzai, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  9 09:48:29 compute-0 systemd[1]: Started libpod-conmon-60e08f19ecce1865320d627622a0d64c05e3ef6089285594eba147192c02d254.scope.
Oct  9 09:48:29 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:48:29 compute-0 podman[158878]: 2025-10-09 09:48:29.639075212 +0000 UTC m=+0.078604175 container init 60e08f19ecce1865320d627622a0d64c05e3ef6089285594eba147192c02d254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_banzai, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:48:29 compute-0 podman[158878]: 2025-10-09 09:48:29.644113752 +0000 UTC m=+0.083642715 container start 60e08f19ecce1865320d627622a0d64c05e3ef6089285594eba147192c02d254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Oct  9 09:48:29 compute-0 podman[158878]: 2025-10-09 09:48:29.645107856 +0000 UTC m=+0.084636809 container attach 60e08f19ecce1865320d627622a0d64c05e3ef6089285594eba147192c02d254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  9 09:48:29 compute-0 flamboyant_banzai[158893]: 167 167
Oct  9 09:48:29 compute-0 systemd[1]: libpod-60e08f19ecce1865320d627622a0d64c05e3ef6089285594eba147192c02d254.scope: Deactivated successfully.
Oct  9 09:48:29 compute-0 podman[158878]: 2025-10-09 09:48:29.649069986 +0000 UTC m=+0.088598939 container died 60e08f19ecce1865320d627622a0d64c05e3ef6089285594eba147192c02d254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_banzai, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  9 09:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e93928195877fb0e532951acec537d59a9846c559a5afc5f31072fa720a06aa4-merged.mount: Deactivated successfully.
Oct  9 09:48:29 compute-0 podman[158878]: 2025-10-09 09:48:29.67368892 +0000 UTC m=+0.113217874 container remove 60e08f19ecce1865320d627622a0d64c05e3ef6089285594eba147192c02d254 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_banzai, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:48:29 compute-0 podman[158878]: 2025-10-09 09:48:29.578970745 +0000 UTC m=+0.018499719 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:48:29 compute-0 systemd[1]: libpod-conmon-60e08f19ecce1865320d627622a0d64c05e3ef6089285594eba147192c02d254.scope: Deactivated successfully.
Oct  9 09:48:29 compute-0 python3.9[158881]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760003309.2499313-1118-40352740779163/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:29 compute-0 podman[158915]: 2025-10-09 09:48:29.793697989 +0000 UTC m=+0.025698381 container create c94f00e3c60c220cc59cd827ec2a12db344b69bbb92717fbe33ec8849d0ee356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:48:29 compute-0 systemd[1]: Started libpod-conmon-c94f00e3c60c220cc59cd827ec2a12db344b69bbb92717fbe33ec8849d0ee356.scope.
Oct  9 09:48:29 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25a11497897422701c3585be8577db990c59780c20abad7dd3ab46010a99f09d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25a11497897422701c3585be8577db990c59780c20abad7dd3ab46010a99f09d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25a11497897422701c3585be8577db990c59780c20abad7dd3ab46010a99f09d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25a11497897422701c3585be8577db990c59780c20abad7dd3ab46010a99f09d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25a11497897422701c3585be8577db990c59780c20abad7dd3ab46010a99f09d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:48:29 compute-0 podman[158915]: 2025-10-09 09:48:29.84823561 +0000 UTC m=+0.080236003 container init c94f00e3c60c220cc59cd827ec2a12db344b69bbb92717fbe33ec8849d0ee356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct  9 09:48:29 compute-0 podman[158915]: 2025-10-09 09:48:29.854380896 +0000 UTC m=+0.086381289 container start c94f00e3c60c220cc59cd827ec2a12db344b69bbb92717fbe33ec8849d0ee356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_neumann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct  9 09:48:29 compute-0 podman[158915]: 2025-10-09 09:48:29.856831957 +0000 UTC m=+0.088832350 container attach c94f00e3c60c220cc59cd827ec2a12db344b69bbb92717fbe33ec8849d0ee356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_neumann, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:48:29 compute-0 podman[158915]: 2025-10-09 09:48:29.783810467 +0000 UTC m=+0.015810880 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:48:30 compute-0 ceph-mon[4497]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct  9 09:48:30 compute-0 infallible_neumann[158951]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:48:30 compute-0 infallible_neumann[158951]: --> All data devices are unavailable
Oct  9 09:48:30 compute-0 systemd[1]: libpod-c94f00e3c60c220cc59cd827ec2a12db344b69bbb92717fbe33ec8849d0ee356.scope: Deactivated successfully.
Oct  9 09:48:30 compute-0 podman[158915]: 2025-10-09 09:48:30.131572825 +0000 UTC m=+0.363573218 container died c94f00e3c60c220cc59cd827ec2a12db344b69bbb92717fbe33ec8849d0ee356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:48:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-25a11497897422701c3585be8577db990c59780c20abad7dd3ab46010a99f09d-merged.mount: Deactivated successfully.
Oct  9 09:48:30 compute-0 podman[158915]: 2025-10-09 09:48:30.156445408 +0000 UTC m=+0.388445801 container remove c94f00e3c60c220cc59cd827ec2a12db344b69bbb92717fbe33ec8849d0ee356 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  9 09:48:30 compute-0 systemd[1]: libpod-conmon-c94f00e3c60c220cc59cd827ec2a12db344b69bbb92717fbe33ec8849d0ee356.scope: Deactivated successfully.
Oct  9 09:48:30 compute-0 python3.9[159009]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:48:30 compute-0 systemd[1]: Reloading.
Oct  9 09:48:30 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:48:30 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:48:30 compute-0 ceph-mon[4497]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct  9 09:48:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:30.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:30 compute-0 podman[159250]: 2025-10-09 09:48:30.804807536 +0000 UTC m=+0.028750905 container create a5f1285bb39969080ebcbc8553cc75a07f8625330f792fcc4274dddc2245cb12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_galois, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 09:48:30 compute-0 systemd[1]: Started libpod-conmon-a5f1285bb39969080ebcbc8553cc75a07f8625330f792fcc4274dddc2245cb12.scope.
Oct  9 09:48:30 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:48:30 compute-0 podman[159250]: 2025-10-09 09:48:30.860680166 +0000 UTC m=+0.084623544 container init a5f1285bb39969080ebcbc8553cc75a07f8625330f792fcc4274dddc2245cb12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 09:48:30 compute-0 podman[159250]: 2025-10-09 09:48:30.866520486 +0000 UTC m=+0.090463864 container start a5f1285bb39969080ebcbc8553cc75a07f8625330f792fcc4274dddc2245cb12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:48:30 compute-0 podman[159250]: 2025-10-09 09:48:30.867597096 +0000 UTC m=+0.091540465 container attach a5f1285bb39969080ebcbc8553cc75a07f8625330f792fcc4274dddc2245cb12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:48:30 compute-0 gifted_galois[159263]: 167 167
Oct  9 09:48:30 compute-0 systemd[1]: libpod-a5f1285bb39969080ebcbc8553cc75a07f8625330f792fcc4274dddc2245cb12.scope: Deactivated successfully.
Oct  9 09:48:30 compute-0 conmon[159263]: conmon a5f1285bb39969080ebc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a5f1285bb39969080ebcbc8553cc75a07f8625330f792fcc4274dddc2245cb12.scope/container/memory.events
Oct  9 09:48:30 compute-0 podman[159250]: 2025-10-09 09:48:30.793366026 +0000 UTC m=+0.017309414 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:48:30 compute-0 podman[159268]: 2025-10-09 09:48:30.901311049 +0000 UTC m=+0.017589484 container died a5f1285bb39969080ebcbc8553cc75a07f8625330f792fcc4274dddc2245cb12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_galois, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  9 09:48:30 compute-0 python3.9[159220]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:48:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7f436fd474950a199107ad99a56a5a971ab7cd6a36f87bc6ad85a18a22b015d-merged.mount: Deactivated successfully.
Oct  9 09:48:30 compute-0 podman[159268]: 2025-10-09 09:48:30.926307525 +0000 UTC m=+0.042585940 container remove a5f1285bb39969080ebcbc8553cc75a07f8625330f792fcc4274dddc2245cb12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_galois, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:48:30 compute-0 systemd[1]: libpod-conmon-a5f1285bb39969080ebcbc8553cc75a07f8625330f792fcc4274dddc2245cb12.scope: Deactivated successfully.
Oct  9 09:48:30 compute-0 systemd[1]: Reloading.
Oct  9 09:48:31 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:48:31 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:48:31 compute-0 podman[159321]: 2025-10-09 09:48:31.070774213 +0000 UTC m=+0.029301304 container create efb42a1d8fc78b930c836d92f2bd2626c7fb85f5717052fe1f09d9744dbf4b04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_poincare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:48:31 compute-0 podman[159321]: 2025-10-09 09:48:31.059695656 +0000 UTC m=+0.018222747 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:48:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:31 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v447: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 09:48:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:31.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
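The repeating anonymous "HEAD /" requests from 192.168.122.100 and 192.168.122.102 arrive on a fixed ~2 s cadence and always return 200, the signature of external load-balancer health probes against the radosgw beast frontend. A minimal probe of the same shape, sketched in Python; the listening port is an assumption, since the log never shows it, and this sends HTTP/1.1 rather than the HTTP/1.0 seen above:

    # Hypothetical radosgw liveness probe. Host and the HEAD / request with an
    # expected 200 come from the log; the port 8080 is an assumption.
    import http.client

    def rgw_alive(host="192.168.122.100", port=8080, timeout=2.0):
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("HEAD", "/")
            return conn.getresponse().status == 200
        except OSError:          # connection refused or timed out
            return False
        finally:
            conn.close()

    if __name__ == "__main__":
        print("radosgw up:", rgw_alive())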
Oct  9 09:48:31 compute-0 systemd[1]: Started libpod-conmon-efb42a1d8fc78b930c836d92f2bd2626c7fb85f5717052fe1f09d9744dbf4b04.scope.
Oct  9 09:48:31 compute-0 systemd[1]: Starting iscsid container...
Oct  9 09:48:31 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ebdf0a404cf6612baed01aaf1d666c842c29ec03a2e9012369e103d05d79134/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ebdf0a404cf6612baed01aaf1d666c842c29ec03a2e9012369e103d05d79134/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ebdf0a404cf6612baed01aaf1d666c842c29ec03a2e9012369e103d05d79134/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ebdf0a404cf6612baed01aaf1d666c842c29ec03a2e9012369e103d05d79134/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:48:31 compute-0 podman[159321]: 2025-10-09 09:48:31.245004225 +0000 UTC m=+0.203531326 container init efb42a1d8fc78b930c836d92f2bd2626c7fb85f5717052fe1f09d9744dbf4b04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_poincare, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  9 09:48:31 compute-0 podman[159321]: 2025-10-09 09:48:31.250079062 +0000 UTC m=+0.208606143 container start efb42a1d8fc78b930c836d92f2bd2626c7fb85f5717052fe1f09d9744dbf4b04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  9 09:48:31 compute-0 podman[159321]: 2025-10-09 09:48:31.255337126 +0000 UTC m=+0.213864207 container attach efb42a1d8fc78b930c836d92f2bd2626c7fb85f5717052fe1f09d9744dbf4b04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  9 09:48:31 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0073639c1c90a99dc7d0a51fe6f60dae603f25c06227ba7b362adad526aab756/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  9 09:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0073639c1c90a99dc7d0a51fe6f60dae603f25c06227ba7b362adad526aab756/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct  9 09:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0073639c1c90a99dc7d0a51fe6f60dae603f25c06227ba7b362adad526aab756/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  9 09:48:31 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9.
Oct  9 09:48:31 compute-0 podman[159341]: 2025-10-09 09:48:31.321708497 +0000 UTC m=+0.089764786 container init 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:48:31 compute-0 iscsid[159356]: + sudo -E kolla_set_configs
Oct  9 09:48:31 compute-0 podman[159341]: 2025-10-09 09:48:31.339598005 +0000 UTC m=+0.107654274 container start 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  9 09:48:31 compute-0 podman[159341]: iscsid
Oct  9 09:48:31 compute-0 systemd[1]: Started iscsid container.
Oct  9 09:48:31 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct  9 09:48:31 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  9 09:48:31 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  9 09:48:31 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct  9 09:48:31 compute-0 podman[159362]: 2025-10-09 09:48:31.403741848 +0000 UTC m=+0.055777280 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  9 09:48:31 compute-0 systemd[1]: 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9-d3cdca431c7fe09.service: Main process exited, code=exited, status=1/FAILURE
Oct  9 09:48:31 compute-0 systemd[1]: 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9-d3cdca431c7fe09.service: Failed with result 'exit-code'.
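The failed transient unit 5df366…-d3cdca431c7fe09.service is the wrapper systemd spawns for "podman healthcheck run"; it exits 1 here because the iscsid container started only milliseconds earlier and still reports health_status=starting. A sketch for re-running the probe by hand and reading the recorded state back out of "podman inspect" (container name from the log; the exit-code convention, 0 for healthy and nonzero otherwise, is podman's; the State key is probed under both spellings as an assumption about the podman version):

    # Re-run a container healthcheck and read back the recorded status.
    import json, subprocess

    def check(container="iscsid"):
        run = subprocess.run(["podman", "healthcheck", "run", container])
        out = subprocess.run(["podman", "inspect", container],
                             capture_output=True, text=True, check=True)
        state = json.loads(out.stdout)[0]["State"]
        # Newer podman exposes "Health" (docker-compatible), older "Healthcheck".
        health = state.get("Health") or state.get("Healthcheck") or {}
        return run.returncode, health.get("Status")

    if __name__ == "__main__":
        code, status = check()
        print(f"exit={code} status={status}")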
Oct  9 09:48:31 compute-0 cool_poincare[159339]: {
Oct  9 09:48:31 compute-0 cool_poincare[159339]:    "1": [
Oct  9 09:48:31 compute-0 cool_poincare[159339]:        {
Oct  9 09:48:31 compute-0 cool_poincare[159339]:            "devices": [
Oct  9 09:48:31 compute-0 cool_poincare[159339]:                "/dev/loop3"
Oct  9 09:48:31 compute-0 cool_poincare[159339]:            ],
Oct  9 09:48:31 compute-0 cool_poincare[159339]:            "lv_name": "ceph_lv0",
Oct  9 09:48:31 compute-0 cool_poincare[159339]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:48:31 compute-0 cool_poincare[159339]:            "lv_size": "21470642176",
Oct  9 09:48:31 compute-0 cool_poincare[159339]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:48:31 compute-0 cool_poincare[159339]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:48:31 compute-0 cool_poincare[159339]:            "name": "ceph_lv0",
Oct  9 09:48:31 compute-0 cool_poincare[159339]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:48:31 compute-0 cool_poincare[159339]:            "tags": {
Oct  9 09:48:31 compute-0 cool_poincare[159339]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:48:31 compute-0 cool_poincare[159339]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:48:31 compute-0 cool_poincare[159339]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:48:31 compute-0 cool_poincare[159339]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:48:31 compute-0 cool_poincare[159339]:                "ceph.cluster_name": "ceph",
Oct  9 09:48:31 compute-0 cool_poincare[159339]:                "ceph.crush_device_class": "",
Oct  9 09:48:31 compute-0 cool_poincare[159339]:                "ceph.encrypted": "0",
Oct  9 09:48:31 compute-0 cool_poincare[159339]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:48:31 compute-0 cool_poincare[159339]:                "ceph.osd_id": "1",
Oct  9 09:48:31 compute-0 cool_poincare[159339]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:48:31 compute-0 cool_poincare[159339]:                "ceph.type": "block",
Oct  9 09:48:31 compute-0 cool_poincare[159339]:                "ceph.vdo": "0",
Oct  9 09:48:31 compute-0 cool_poincare[159339]:                "ceph.with_tpm": "0"
Oct  9 09:48:31 compute-0 cool_poincare[159339]:            },
Oct  9 09:48:31 compute-0 cool_poincare[159339]:            "type": "block",
Oct  9 09:48:31 compute-0 cool_poincare[159339]:            "vg_name": "ceph_vg0"
Oct  9 09:48:31 compute-0 cool_poincare[159339]:        }
Oct  9 09:48:31 compute-0 cool_poincare[159339]:    ]
Oct  9 09:48:31 compute-0 cool_poincare[159339]: }
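The JSON block above is ceph-volume "lvm list"-style output for OSD 1: the top-level key is the OSD id, and each entry carries the LV path plus the ceph.* lv_tags that let cephadm re-associate the OSD with its backing device (/dev/loop3 here). A sketch that reduces such a document to an osd_id-to-device mapping; the field names are taken from the output above, and reading the document from stdin is an assumption:

    # Reduce ceph-volume lvm-list JSON to {osd_id: lv_path/fsid/devices}.
    import json, sys

    def osd_map(doc):
        out = {}
        for osd_id, entries in doc.items():
            for e in entries:
                if e.get("type") == "block":       # "block" as in the log above
                    out[osd_id] = {
                        "lv_path": e["lv_path"],
                        "osd_fsid": e["tags"]["ceph.osd_fsid"],
                        "devices": e["devices"],
                    }
        return out

    if __name__ == "__main__":
        print(osd_map(json.load(sys.stdin)))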
Oct  9 09:48:31 compute-0 systemd[159376]: Queued start job for default target Main User Target.
Oct  9 09:48:31 compute-0 systemd[159376]: Created slice User Application Slice.
Oct  9 09:48:31 compute-0 systemd[159376]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  9 09:48:31 compute-0 systemd[159376]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 09:48:31 compute-0 systemd[159376]: Reached target Paths.
Oct  9 09:48:31 compute-0 systemd[159376]: Reached target Timers.
Oct  9 09:48:31 compute-0 systemd[159376]: Starting D-Bus User Message Bus Socket...
Oct  9 09:48:31 compute-0 systemd[159376]: Starting Create User's Volatile Files and Directories...
Oct  9 09:48:31 compute-0 systemd[159376]: Listening on D-Bus User Message Bus Socket.
Oct  9 09:48:31 compute-0 systemd[159376]: Reached target Sockets.
Oct  9 09:48:31 compute-0 systemd[159376]: Finished Create User's Volatile Files and Directories.
Oct  9 09:48:31 compute-0 systemd[159376]: Reached target Basic System.
Oct  9 09:48:31 compute-0 systemd[159376]: Reached target Main User Target.
Oct  9 09:48:31 compute-0 systemd[159376]: Startup finished in 98ms.
Oct  9 09:48:31 compute-0 systemd[1]: Started User Manager for UID 0.
Oct  9 09:48:31 compute-0 systemd[1]: Started Session c3 of User root.
Oct  9 09:48:31 compute-0 systemd[1]: libpod-efb42a1d8fc78b930c836d92f2bd2626c7fb85f5717052fe1f09d9744dbf4b04.scope: Deactivated successfully.
Oct  9 09:48:31 compute-0 podman[159321]: 2025-10-09 09:48:31.515464382 +0000 UTC m=+0.473991473 container died efb42a1d8fc78b930c836d92f2bd2626c7fb85f5717052fe1f09d9744dbf4b04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_poincare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  9 09:48:31 compute-0 podman[159321]: 2025-10-09 09:48:31.542517917 +0000 UTC m=+0.501044999 container remove efb42a1d8fc78b930c836d92f2bd2626c7fb85f5717052fe1f09d9744dbf4b04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Oct  9 09:48:31 compute-0 systemd[1]: libpod-conmon-efb42a1d8fc78b930c836d92f2bd2626c7fb85f5717052fe1f09d9744dbf4b04.scope: Deactivated successfully.
Oct  9 09:48:31 compute-0 iscsid[159356]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  9 09:48:31 compute-0 iscsid[159356]: INFO:__main__:Validating config file
Oct  9 09:48:31 compute-0 iscsid[159356]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  9 09:48:31 compute-0 iscsid[159356]: INFO:__main__:Writing out command to execute
Oct  9 09:48:31 compute-0 systemd[1]: session-c3.scope: Deactivated successfully.
Oct  9 09:48:31 compute-0 iscsid[159356]: ++ cat /run_command
Oct  9 09:48:31 compute-0 iscsid[159356]: + CMD='/usr/sbin/iscsid -f'
Oct  9 09:48:31 compute-0 iscsid[159356]: + ARGS=
Oct  9 09:48:31 compute-0 iscsid[159356]: + sudo kolla_copy_cacerts
Oct  9 09:48:31 compute-0 systemd[1]: Started Session c4 of User root.
Oct  9 09:48:31 compute-0 iscsid[159356]: + [[ ! -n '' ]]
Oct  9 09:48:31 compute-0 iscsid[159356]: + . kolla_extend_start
Oct  9 09:48:31 compute-0 iscsid[159356]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct  9 09:48:31 compute-0 iscsid[159356]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct  9 09:48:31 compute-0 iscsid[159356]: + umask 0022
Oct  9 09:48:31 compute-0 iscsid[159356]: + exec /usr/sbin/iscsid -f
Oct  9 09:48:31 compute-0 iscsid[159356]: Running command: '/usr/sbin/iscsid -f'
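The "+"-prefixed lines are the traced kolla entrypoint: kolla_set_configs validates /var/lib/kolla/config_files/config.json and copies files in under the COPY_ALWAYS strategy, kolla_copy_cacerts installs CA bundles, the command is read back from /run_command, and the shell finally execs /usr/sbin/iscsid -f. A much-reduced sketch of that sequence; the config_files/command schema matches the config.json mounted at the path above, but the real kolla_set_configs also handles ownership, permissions, and globbing:

    # Reduced sketch of the kolla entrypoint flow traced above.
    import json, os, shlex, shutil

    CONFIG = "/var/lib/kolla/config_files/config.json"

    def kolla_start():
        cfg = json.load(open(CONFIG))
        for item in cfg.get("config_files", []):   # COPY_ALWAYS: copy every time
            shutil.copy(item["source"], item["dest"])
        with open("/run_command", "w") as f:       # same handoff file as the trace
            f.write(cfg["command"])
        cmd = shlex.split(open("/run_command").read())
        print(f"Running command: {cfg['command']!r}")
        os.execvp(cmd[0], cmd)                     # becomes /usr/sbin/iscsid -f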
Oct  9 09:48:31 compute-0 systemd[1]: session-c4.scope: Deactivated successfully.
Oct  9 09:48:31 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Oct  9 09:48:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ebdf0a404cf6612baed01aaf1d666c842c29ec03a2e9012369e103d05d79134-merged.mount: Deactivated successfully.
Oct  9 09:48:31 compute-0 python3.9[159622]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:31 compute-0 podman[159654]: 2025-10-09 09:48:31.98744364 +0000 UTC m=+0.028403782 container create 83dca7f24690ed91e29ae0c2068fb80282029ee7d0114ce6d86fbe9e5cc825e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_wright, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  9 09:48:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:31 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:31 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:31 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:32 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:32 compute-0 systemd[1]: Started libpod-conmon-83dca7f24690ed91e29ae0c2068fb80282029ee7d0114ce6d86fbe9e5cc825e3.scope.
Oct  9 09:48:32 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:48:32 compute-0 podman[159654]: 2025-10-09 09:48:32.038705797 +0000 UTC m=+0.079665949 container init 83dca7f24690ed91e29ae0c2068fb80282029ee7d0114ce6d86fbe9e5cc825e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  9 09:48:32 compute-0 podman[159654]: 2025-10-09 09:48:32.044507244 +0000 UTC m=+0.085467386 container start 83dca7f24690ed91e29ae0c2068fb80282029ee7d0114ce6d86fbe9e5cc825e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_wright, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:48:32 compute-0 podman[159654]: 2025-10-09 09:48:32.045722816 +0000 UTC m=+0.086682959 container attach 83dca7f24690ed91e29ae0c2068fb80282029ee7d0114ce6d86fbe9e5cc825e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_wright, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:48:32 compute-0 cool_wright[159692]: 167 167
Oct  9 09:48:32 compute-0 systemd[1]: libpod-83dca7f24690ed91e29ae0c2068fb80282029ee7d0114ce6d86fbe9e5cc825e3.scope: Deactivated successfully.
Oct  9 09:48:32 compute-0 podman[159654]: 2025-10-09 09:48:32.048275629 +0000 UTC m=+0.089235781 container died 83dca7f24690ed91e29ae0c2068fb80282029ee7d0114ce6d86fbe9e5cc825e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:48:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-28b10fd11da98ec299d1441e5988d9ac2388c70be70bdb3755f31a884977ee16-merged.mount: Deactivated successfully.
Oct  9 09:48:32 compute-0 podman[159654]: 2025-10-09 09:48:32.070127587 +0000 UTC m=+0.111087730 container remove 83dca7f24690ed91e29ae0c2068fb80282029ee7d0114ce6d86fbe9e5cc825e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  9 09:48:32 compute-0 podman[159654]: 2025-10-09 09:48:31.976109442 +0000 UTC m=+0.017069604 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:48:32 compute-0 systemd[1]: libpod-conmon-83dca7f24690ed91e29ae0c2068fb80282029ee7d0114ce6d86fbe9e5cc825e3.scope: Deactivated successfully.
Oct  9 09:48:32 compute-0 podman[159742]: 2025-10-09 09:48:32.196389563 +0000 UTC m=+0.030645217 container create d633d54057b213dffbc015067e4d1e361080298792f4d1b63641182bed42d768 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_cartwright, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  9 09:48:32 compute-0 systemd[1]: Started libpod-conmon-d633d54057b213dffbc015067e4d1e361080298792f4d1b63641182bed42d768.scope.
Oct  9 09:48:32 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c14dd280188c6c48e0036d67252fd1a56cd4bf90ab811d26bad97ef62ae7204/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c14dd280188c6c48e0036d67252fd1a56cd4bf90ab811d26bad97ef62ae7204/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c14dd280188c6c48e0036d67252fd1a56cd4bf90ab811d26bad97ef62ae7204/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c14dd280188c6c48e0036d67252fd1a56cd4bf90ab811d26bad97ef62ae7204/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:48:32 compute-0 podman[159742]: 2025-10-09 09:48:32.249448156 +0000 UTC m=+0.083703830 container init d633d54057b213dffbc015067e4d1e361080298792f4d1b63641182bed42d768 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_cartwright, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 09:48:32 compute-0 podman[159742]: 2025-10-09 09:48:32.255266626 +0000 UTC m=+0.089522280 container start d633d54057b213dffbc015067e4d1e361080298792f4d1b63641182bed42d768 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:48:32 compute-0 podman[159742]: 2025-10-09 09:48:32.256350379 +0000 UTC m=+0.090606043 container attach d633d54057b213dffbc015067e4d1e361080298792f4d1b63641182bed42d768 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_cartwright, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:48:32 compute-0 podman[159742]: 2025-10-09 09:48:32.185128051 +0000 UTC m=+0.019383725 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:48:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:48:32] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  9 09:48:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:48:32] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Oct  9 09:48:32 compute-0 python3.9[159859]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
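Taken together with the stat on /etc/iscsi/.iscsid_restart_required at 09:48:31 and the edpm_iscsid.service restart just before it, this file task completes a flag-file handshake: the service is restarted only while the marker exists, then the marker is cleared so later playbook runs stay idempotent. The same pattern sketched directly, with the unit name and path taken from the log:

    # Flag-file restart pattern: restart only when a marker exists, then clear it.
    import os, subprocess

    FLAG = "/etc/iscsi/.iscsid_restart_required"

    def maybe_restart(unit="edpm_iscsid.service"):
        if os.path.exists(FLAG):
            subprocess.run(["systemctl", "restart", unit], check=True)
            os.remove(FLAG)        # keep subsequent runs from restarting again
            return True
        return False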
Oct  9 09:48:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:32.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:32 compute-0 lvm[159954]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:48:32 compute-0 lvm[159954]: VG ceph_vg0 finished
Oct  9 09:48:32 compute-0 friendly_cartwright[159797]: {}
Oct  9 09:48:32 compute-0 systemd[1]: libpod-d633d54057b213dffbc015067e4d1e361080298792f4d1b63641182bed42d768.scope: Deactivated successfully.
Oct  9 09:48:32 compute-0 podman[159742]: 2025-10-09 09:48:32.753208853 +0000 UTC m=+0.587464507 container died d633d54057b213dffbc015067e4d1e361080298792f4d1b63641182bed42d768 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:48:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c14dd280188c6c48e0036d67252fd1a56cd4bf90ab811d26bad97ef62ae7204-merged.mount: Deactivated successfully.
Oct  9 09:48:32 compute-0 podman[159742]: 2025-10-09 09:48:32.779823931 +0000 UTC m=+0.614079586 container remove d633d54057b213dffbc015067e4d1e361080298792f4d1b63641182bed42d768 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:48:32 compute-0 systemd[1]: libpod-conmon-d633d54057b213dffbc015067e4d1e361080298792f4d1b63641182bed42d768.scope: Deactivated successfully.
Oct  9 09:48:32 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:48:32 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:32 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:48:32 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:33 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v448: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 09:48:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:33.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:33 compute-0 python3.9[160117]: ansible-ansible.builtin.service_facts Invoked
Oct  9 09:48:33 compute-0 network[160134]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 09:48:33 compute-0 network[160135]: 'network-scripts' will be removed from distribution in near future.
Oct  9 09:48:33 compute-0 network[160136]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 09:48:33 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:33 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Oct  9 09:48:34 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:48:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:48:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:34.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:34 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:35 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v449: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 09:48:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:35.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:48:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:36.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:48:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:36 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:37.004Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:48:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:37.012Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:48:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:37.012Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:48:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:37.012Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
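All three webhook receivers fail identically: the resolver at 192.168.122.80 has no records for np0005478302 through np0005478304 in the shiftstack domain, so alertmanager exhausts its retries against the dashboard endpoints. The failing lookup can be reproduced in isolation; hostnames and the 8443 receiver port are taken from the error messages above:

    # Reproduce the name resolution that alertmanager's webhooks fail on.
    import socket

    HOSTS = ["np0005478302.shiftstack", "np0005478303.shiftstack",
             "np0005478304.shiftstack"]

    for host in HOSTS:
        try:
            addr = socket.getaddrinfo(host, 8443)[0][4][0]
            print(f"{host} -> {addr}")
        except socket.gaierror as e:   # matches "no such host" in the log
            print(f"{host}: no such host ({e})")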
Oct  9 09:48:37 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v450: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 09:48:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:37.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:38 compute-0 python3.9[160415]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  9 09:48:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:38.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:38 compute-0 python3.9[160569]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct  9 09:48:39 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v451: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 09:48:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:39.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:39 compute-0 python3.9[160725]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:40 compute-0 python3.9[160848]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003319.0731301-1340-218127830497/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:48:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:40.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:48:40 compute-0 python3.9[161001]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:41 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v452: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:48:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:41.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:41 compute-0 python3.9[161154]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:48:41 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  9 09:48:41 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct  9 09:48:41 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct  9 09:48:41 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct  9 09:48:41 compute-0 systemd[1]: Finished Load Kernel Modules.
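The tasks above persist dm-multipath in two places, /etc/modules-load.d/dm-multipath.conf for systemd-modules-load and a line in /etc/modules, then restart the loader so the module is active immediately. An equivalent minimal sequence; the module name and paths come from the log, and the /etc/modules entry is omitted here since modules-load.d already covers boot on this platform:

    # Load dm-multipath now and persist it for boot, then re-run the loader.
    import pathlib, subprocess

    MODULE = "dm-multipath"

    subprocess.run(["modprobe", MODULE], check=True)
    pathlib.Path("/etc/modules-load.d").mkdir(mode=0o755, exist_ok=True)
    pathlib.Path(f"/etc/modules-load.d/{MODULE}.conf").write_text(MODULE + "\n")
    subprocess.run(["systemctl", "restart", "systemd-modules-load.service"],
                   check=True)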
Oct  9 09:48:41 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct  9 09:48:41 compute-0 systemd[159376]: Activating special unit Exit the Session...
Oct  9 09:48:41 compute-0 systemd[159376]: Stopped target Main User Target.
Oct  9 09:48:41 compute-0 systemd[159376]: Stopped target Basic System.
Oct  9 09:48:41 compute-0 systemd[159376]: Stopped target Paths.
Oct  9 09:48:41 compute-0 systemd[159376]: Stopped target Sockets.
Oct  9 09:48:41 compute-0 systemd[159376]: Stopped target Timers.
Oct  9 09:48:41 compute-0 systemd[159376]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  9 09:48:41 compute-0 systemd[159376]: Closed D-Bus User Message Bus Socket.
Oct  9 09:48:41 compute-0 systemd[159376]: Stopped Create User's Volatile Files and Directories.
Oct  9 09:48:41 compute-0 systemd[159376]: Removed slice User Application Slice.
Oct  9 09:48:41 compute-0 systemd[159376]: Reached target Shutdown.
Oct  9 09:48:41 compute-0 systemd[159376]: Finished Exit the Session.
Oct  9 09:48:41 compute-0 systemd[159376]: Reached target Exit the Session.
Oct  9 09:48:41 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct  9 09:48:41 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct  9 09:48:41 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  9 09:48:41 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  9 09:48:41 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  9 09:48:41 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  9 09:48:41 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct  9 09:48:41 compute-0 python3.9[161311]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:41 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:48:42] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Oct  9 09:48:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:48:42] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Oct  9 09:48:42 compute-0 podman[161436]: 2025-10-09 09:48:42.480647018 +0000 UTC m=+0.050020847 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  9 09:48:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:42.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:42 compute-0 python3.9[161480]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:43 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v453: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:48:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:43.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:43 compute-0 python3.9[161633]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:43 compute-0 python3.9[161785]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:44 compute-0 python3.9[161909]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003323.4117105-1514-2550876507402/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
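The stat/copy pair above is Ansible's idempotent file deployment: the stat tasks fetch the target's SHA-1, and the copy only rewrites /etc/multipath.conf when it differs from the source checksum recorded in the task args. The same comparison by hand:

    import hashlib
    from pathlib import Path

    # Compare against the checksum logged by the copy task above.
    digest = hashlib.sha1(Path("/etc/multipath.conf").read_bytes()).hexdigest()
    print(digest == "bf02ab264d3d648048a81f3bacec8bc58db93162")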
Oct  9 09:48:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:48:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:44.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:48:44 compute-0 python3.9[162062]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
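The grep -q above tests whether /etc/multipath.conf already opens a blacklist section at the start of a line; the tasks that follow only build one if it is missing. A Python equivalent of the same test:

    import re
    from pathlib import Path

    text = Path("/etc/multipath.conf").read_text()
    # Same pattern as the grep: a line beginning "blacklist" then "{".
    has_blacklist = re.search(r"^blacklist\s*{", text, re.MULTILINE) is not None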
Oct  9 09:48:45 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v454: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:48:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:45.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:45 compute-0 python3.9[162215]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:46 compute-0 python3.9[162367]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:46.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:46 compute-0 python3.9[162521]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
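Taken together, the three tasks above build an empty blacklist section: lineinfile ensures a "blacklist {" line exists, the first replace closes it with "}" on the next line, and the second replace strips a catch-all devnode ".*" entry if one immediately follows the opener. A sketch replaying the same edits in plain Python, mirroring the module arguments (ansible's replace rewrites every match; a single section is expected here):

    import re
    from pathlib import Path

    conf = Path("/etc/multipath.conf")
    text = conf.read_text()
    if not re.search(r"^blacklist\s*{", text, re.M):   # lineinfile state=present
        text += "blacklist {\n"
    text = re.sub(r"^(blacklist {)", r"\1\n}", text, flags=re.M)
    text = re.sub(r'^blacklist\s*{\n\s+devnode "\.\*"', "blacklist {",
                  text, flags=re.M)                    # drop catch-all devnode
    conf.write_text(text)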
Oct  9 09:48:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:46 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:46 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:46 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:47.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:48:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:47.015Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:48:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:47.015Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:48:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:47.015Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
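The recurring alertmanager errors above (and their repeats later in this log) are pure DNS: the ceph-dashboard webhook targets np0005478302/3/4.shiftstack, and the resolver at 192.168.122.80 has no records for those names, so every notify attempt and retry ends in "no such host". Reproducing the failing lookup:

    import socket

    try:
        socket.getaddrinfo("np0005478302.shiftstack", 8443)
    except socket.gaierror as exc:
        # Mirrors the "dial tcp: lookup ... no such host" error above.
        print("lookup failed:", exc)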
Oct  9 09:48:47 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v455: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:48:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:48:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:47.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:48:47 compute-0 python3.9[162673]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:47 compute-0 python3.9[162825]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:48 compute-0 python3.9[162978]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:48.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:48 compute-0 python3.9[163131]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
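The four lineinfile tasks above each add one option with insertafter=^defaults and firstmatch=True, so every new line lands directly after the defaults opener and the last task's option ends up first. Assuming a pre-existing "defaults {" block, the section should afterwards read roughly (indentation is the eight spaces embedded in the line= arguments):

    defaults {
            user_friendly_names no
            skip_kpartx yes
            recheck_wwid yes
            find_multipaths yes
    }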
Oct  9 09:48:49 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v456: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:48:49 compute-0 python3.9[163283]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:49.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:48:49
Oct  9 09:48:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:48:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 09:48:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.control', '.rgw.root', 'vms', '.nfs', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'volumes']
Oct  9 09:48:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 09:48:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:48:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
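The audit line shows the mgr polling the OSD blocklist through the mon command interface; the same query is available to any client via the CLI. A sketch, assuming a reachable cluster and ceph on PATH:

    import subprocess

    # CLI equivalent of the mon_command in the audit line above.
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    print(out)  # JSON list of blocklisted client addresses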
Oct  9 09:48:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:48:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:48:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:48:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:48:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:48:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:48:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:48:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:48:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:48:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:48:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:48:49 compute-0 python3.9[163437]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:48:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:48:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:48:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:48:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:48:50 compute-0 python3.9[163615]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:50.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:50 compute-0 python3.9[163768]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:51 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v457: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:48:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:51.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:51 compute-0 python3.9[163846]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:51 compute-0 python3.9[163998]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:51 compute-0 podman[164048]: 2025-10-09 09:48:51.945471278 +0000 UTC m=+0.061996796 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  9 09:48:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:51 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:51 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:51 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:52 compute-0 python3.9[164093]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:48:52] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Oct  9 09:48:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:48:52] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Oct  9 09:48:52 compute-0 python3.9[164252]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
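The mode=420 above is not a typo: an unquoted mode in playbook YAML is parsed as a decimal integer, and 420 decimal is exactly octal 0644, so the intended permissions survive. Checking the arithmetic:

    # An unquoted YAML mode is read as decimal; 0o644 = 6*64 + 4*8 + 4 = 420.
    assert int("644", 8) == 420
    print(oct(420))  # '0o644'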
Oct  9 09:48:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:52.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:53 compute-0 python3.9[164405]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:53 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v458: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:48:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:53.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:53 compute-0 python3.9[164483]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:54 compute-0 python3.9[164635]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:54 compute-0 python3.9[164714]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:54.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:55 compute-0 python3.9[164867]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:48:55 compute-0 systemd[1]: Reloading.
Oct  9 09:48:55 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:48:55 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
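The Reloading line and the two generator messages are the visible side of the ansible.builtin.systemd call above (daemon_reload=True, enabled=True, state=started); the rc.local and SysV network warnings are routine noise emitted on every daemon reload. The rough CLI equivalent of that module call:

    import subprocess

    # What the systemd module invocation above amounts to, step by step.
    for cmd in (["systemctl", "daemon-reload"],
                ["systemctl", "enable", "edpm-container-shutdown"],
                ["systemctl", "start", "edpm-container-shutdown"]):
        subprocess.run(cmd, check=True)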
Oct  9 09:48:55 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v459: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:48:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:55.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:55 compute-0 python3.9[165056]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:56 compute-0 python3.9[165134]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:56.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:56 compute-0 python3.9[165288]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:56 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:48:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:57 compute-0 python3.9[165366]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:57.006Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:48:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:57.013Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:48:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:57.014Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:48:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:48:57.015Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:48:57 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v460: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:48:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:48:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:57.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:48:57 compute-0 python3.9[165518]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:48:57 compute-0 systemd[1]: Reloading.
Oct  9 09:48:57 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:48:57 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:48:58 compute-0 systemd[1]: Starting Create netns directory...
Oct  9 09:48:58 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 09:48:58 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 09:48:58 compute-0 systemd[1]: Finished Create netns directory.
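netns-placeholder starts, runs, and deactivates within the same second, the signature of a Type=oneshot unit, and the companion run-netns-placeholder.mount deactivating suggests it also sets up a transient mount under /run/netns for the containers that bind /run/netns:shared. A hypothetical sketch of such a unit; the actual file contents are not in the log, only its "Create netns directory" description:

    [Unit]
    Description=Create netns directory

    [Service]
    Type=oneshot
    # Hypothetical payload: the log shows only that the unit prepares
    # something under /run/netns before deactivating.
    ExecStart=/usr/bin/mkdir -p /run/netns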
Oct  9 09:48:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:58.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:58 compute-0 python3.9[165713]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
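The autoscaler numbers above are internally consistent: each raw pg target equals usage_ratio x bias x 300, where 300 is plausibly mon_target_pg_per_osd (default 100) times three OSDs (an assumption; the OSD count is not in these lines), and the result is then quantized to a power of two subject to pool minimums, which is why every tiny target still reads 16 or 32. Checking one line:

    # Pool 'cephfs.cephfs.meta': usage 5.087256625643029e-07, bias 4.0
    usage, bias, per_cluster = 5.087256625643029e-07, 4.0, 300
    pg_target = usage * bias * per_cluster
    print(pg_target)  # 0.0006104707950771635, matching the log line above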
Oct  9 09:48:59 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v461: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:48:59 compute-0 python3.9[165865]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:48:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:59.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:59 compute-0 python3.9[165988]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003338.8677964-2135-106102938079380/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:00 compute-0 python3.9[166141]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:00.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:00 compute-0 python3.9[166294]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:49:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:01 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v462: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:49:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:01.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:01 compute-0 python3.9[166417]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003340.583949-2210-148760970480193/.source.json _original_basename=.omxc12ha follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:01 compute-0 podman[166449]: 2025-10-09 09:49:01.603690712 +0000 UTC m=+0.047122303 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2)
Oct  9 09:49:01 compute-0 python3.9[166585]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:01 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:01 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:01 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:01 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:49:02] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Oct  9 09:49:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:49:02] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Oct  9 09:49:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:49:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:02.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:49:03 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v463: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:49:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:03.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:03 compute-0 python3.9[167014]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct  9 09:49:04 compute-0 python3.9[167167]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  9 09:49:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:49:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:49:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:04.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:05 compute-0 python3.9[167320]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  9 09:49:05 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v464: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:49:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:05.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:05 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:05 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:05 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:06 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:06.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:06 compute-0 python3[167492]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  9 09:49:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:07.006Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:07.014Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:07.015Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:07.015Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:07 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v465: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:49:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:07.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:08 compute-0 podman[167504]: 2025-10-09 09:49:08.51118907 +0000 UTC m=+1.779769840 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct  9 09:49:08 compute-0 podman[167552]: 2025-10-09 09:49:08.60833394 +0000 UTC m=+0.028910087 container create 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct  9 09:49:08 compute-0 podman[167552]: 2025-10-09 09:49:08.594831303 +0000 UTC m=+0.015407470 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct  9 09:49:08 compute-0 python3[167492]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
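The PODMAN-CONTAINER-DEBUG line is the full podman create issued by edpm_container_manage: the container is only created here, with --restart always and the /openstack/healthcheck probe, and is started later through the generated edpm_multipathd.service unit. Once running, its health state can be read back, for example:

    import subprocess

    # Field path follows podman's current inspect JSON; older releases
    # exposed it as .State.Healthcheck.Status instead.
    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}",
         "multipathd"],
        capture_output=True, text=True, check=True).stdout.strip()
    print(status)  # "healthy", as in the periodic events elsewhere in this log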
Oct  9 09:49:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:08.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
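The anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102 recur roughly every two seconds throughout this window, a pattern consistent with front-end liveness probes against the radosgw beast endpoint. A rough equivalent of such a probe (host and port are placeholders; the log records only the client IPs and the request line):

    # Rough equivalent of the probes hitting radosgw's beast frontend; host and
    # port are placeholders, not values taken from the log.
    import http.client

    conn = http.client.HTTPConnection("rgw.internal.example", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)   # 200, matching the http_status in the log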
Oct  9 09:49:09 compute-0 python3.9[167731]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:49:09 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v466: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:49:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:09.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:09 compute-0 python3.9[167885]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:49:10.099 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:49:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:49:10.099 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:49:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:49:10.100 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:49:10 compute-0 python3.9[167961]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:49:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:10.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:10 compute-0 python3.9[168139]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760003350.177145-2474-166282587007524/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:11 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:11 compute-0 python3.9[168215]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:49:11 compute-0 systemd[1]: Reloading.
Oct  9 09:49:11 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:49:11 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:49:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:11 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v467: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:49:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:11.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:11 compute-0 python3.9[168326]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:11 compute-0 systemd[1]: Reloading.
Oct  9 09:49:11 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:49:11 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
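Between 09:49:10 and 09:49:11 the play installs /etc/systemd/system/edpm_multipathd.service, reloads systemd (triggering the sysv and rc-local generator notes above), then enables and restarts the unit. A condensed sketch of those three steps, assuming direct systemctl calls in place of the ansible-systemd module:

    # Condensed sketch, assuming direct systemctl calls instead of the
    # ansible-systemd module used in the play.
    import subprocess

    subprocess.run(["systemctl", "daemon-reload"], check=True)                      # pick up the new unit
    subprocess.run(["systemctl", "enable", "edpm_multipathd.service"], check=True)  # enabled=True
    subprocess.run(["systemctl", "restart", "edpm_multipathd.service"], check=True) # state=restarted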
Oct  9 09:49:12 compute-0 systemd[1]: Starting multipathd container...
Oct  9 09:49:12 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:49:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6074bb495d54b0b9e69d40fd894bb0e95743f67c8ceb8f12c38d0537eb3cf118/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6074bb495d54b0b9e69d40fd894bb0e95743f67c8ceb8f12c38d0537eb3cf118/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
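The kernel's 0x7fffffff note above is the signed 32-bit time_t ceiling; xfs inodes with small timestamps can represent times only up to that epoch second. A one-liner confirms the date it decodes to:

    # 0x7fffffff seconds after the Unix epoch -- the limit the kernel reports:
    import datetime
    print(datetime.datetime.fromtimestamp(0x7FFFFFFF, datetime.timezone.utc))
    # -> 2038-01-19 03:14:07+00:00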
Oct  9 09:49:12 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e.
Oct  9 09:49:12 compute-0 podman[168367]: 2025-10-09 09:49:12.214827023 +0000 UTC m=+0.088353087 container init 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  9 09:49:12 compute-0 multipathd[168379]: + sudo -E kolla_set_configs
Oct  9 09:49:12 compute-0 podman[168367]: 2025-10-09 09:49:12.235899351 +0000 UTC m=+0.109425415 container start 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct  9 09:49:12 compute-0 podman[168367]: multipathd
Oct  9 09:49:12 compute-0 systemd[1]: Started multipathd container.
Oct  9 09:49:12 compute-0 multipathd[168379]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  9 09:49:12 compute-0 multipathd[168379]: INFO:__main__:Validating config file
Oct  9 09:49:12 compute-0 multipathd[168379]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  9 09:49:12 compute-0 multipathd[168379]: INFO:__main__:Writing out command to execute
Oct  9 09:49:12 compute-0 multipathd[168379]: ++ cat /run_command
Oct  9 09:49:12 compute-0 multipathd[168379]: + CMD='/usr/sbin/multipathd -d'
Oct  9 09:49:12 compute-0 multipathd[168379]: + ARGS=
Oct  9 09:49:12 compute-0 multipathd[168379]: + sudo kolla_copy_cacerts
Oct  9 09:49:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:49:12] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Oct  9 09:49:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:49:12] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Oct  9 09:49:12 compute-0 multipathd[168379]: + [[ ! -n '' ]]
Oct  9 09:49:12 compute-0 multipathd[168379]: + . kolla_extend_start
Oct  9 09:49:12 compute-0 multipathd[168379]: Running command: '/usr/sbin/multipathd -d'
Oct  9 09:49:12 compute-0 multipathd[168379]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  9 09:49:12 compute-0 multipathd[168379]: + umask 0022
Oct  9 09:49:12 compute-0 multipathd[168379]: + exec /usr/sbin/multipathd -d
Oct  9 09:49:12 compute-0 multipathd[168379]: 1036.878241 | --------start up--------
Oct  9 09:49:12 compute-0 multipathd[168379]: 1036.878253 | read /etc/multipath.conf
Oct  9 09:49:12 compute-0 multipathd[168379]: 1036.882155 | path checkers start up
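The '+' lines above are the kolla start script's shell trace: kolla_set_configs applies /var/lib/kolla/config_files/config.json (strategy COPY_ALWAYS), the command is read from /run_command, kolla_copy_cacerts installs the CA bundle, and the script finally execs '/usr/sbin/multipathd -d'. The same control flow in Python, as a sketch only (the real logic lives in the kolla scripts inside the container):

    # Sketch of the traced startup, not the real kolla scripts: apply config,
    # read the command, install CA certs, then exec the daemon.
    import os
    import subprocess

    subprocess.run(["sudo", "-E", "kolla_set_configs"], check=True)   # COPY_ALWAYS strategy
    cmd = open("/run_command").read().strip()                         # '/usr/sbin/multipathd -d'
    subprocess.run(["sudo", "kolla_copy_cacerts"], check=True)
    print(f"Running command: '{cmd}'")
    os.umask(0o022)
    argv = cmd.split()
    os.execvp(argv[0], argv)                                          # replaces this process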
Oct  9 09:49:12 compute-0 podman[168386]: 2025-10-09 09:49:12.319835759 +0000 UTC m=+0.077247288 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:49:12 compute-0 systemd[1]: 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e-46ab1183d22f8a90.service: Main process exited, code=exited, status=1/FAILURE
Oct  9 09:49:12 compute-0 systemd[1]: 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e-46ab1183d22f8a90.service: Failed with result 'exit-code'.
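The transient-unit failure above appears benign at this point: the healthcheck timer fired at 09:49:12.319 while multipathd was still initializing (health_status=starting, health_failing_streak=1), so /openstack/healthcheck exited non-zero once. The same check systemd runs can be exercised by hand; a minimal sketch:

    # Minimal sketch: invoke the same check the transient unit runs
    # ("/usr/bin/podman healthcheck run <container-id>").
    import subprocess

    cid = "6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e"
    rc = subprocess.run(["podman", "healthcheck", "run", cid]).returncode
    print("healthy" if rc == 0 else f"unhealthy (exit {rc})")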
Oct  9 09:49:12 compute-0 podman[168516]: 2025-10-09 09:49:12.596686866 +0000 UTC m=+0.042027027 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:49:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:12.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:12 compute-0 python3.9[168582]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:49:13 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v468: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:49:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:49:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:13.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:49:13 compute-0 python3.9[168736]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:13 compute-0 python3.9[168897]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:49:14 compute-0 systemd[1]: Stopping multipathd container...
Oct  9 09:49:14 compute-0 multipathd[168379]: 1038.637094 | exit (signal)
Oct  9 09:49:14 compute-0 multipathd[168379]: 1038.637149 | --------shut down-------
Oct  9 09:49:14 compute-0 systemd[1]: libpod-6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e.scope: Deactivated successfully.
Oct  9 09:49:14 compute-0 conmon[168379]: conmon 6a0b51670cf69b579822 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e.scope/container/memory.events
Oct  9 09:49:14 compute-0 podman[168901]: 2025-10-09 09:49:14.085762269 +0000 UTC m=+0.054181190 container died 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:49:14 compute-0 systemd[1]: 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e-46ab1183d22f8a90.timer: Deactivated successfully.
Oct  9 09:49:14 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e.
Oct  9 09:49:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e-userdata-shm.mount: Deactivated successfully.
Oct  9 09:49:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-6074bb495d54b0b9e69d40fd894bb0e95743f67c8ceb8f12c38d0537eb3cf118-merged.mount: Deactivated successfully.
Oct  9 09:49:14 compute-0 podman[168901]: 2025-10-09 09:49:14.165323469 +0000 UTC m=+0.133742390 container cleanup 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2)
Oct  9 09:49:14 compute-0 podman[168901]: multipathd
Oct  9 09:49:14 compute-0 podman[168933]: multipathd
Oct  9 09:49:14 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct  9 09:49:14 compute-0 systemd[1]: Stopped multipathd container.
Oct  9 09:49:14 compute-0 systemd[1]: Starting multipathd container...
Oct  9 09:49:14 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:49:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6074bb495d54b0b9e69d40fd894bb0e95743f67c8ceb8f12c38d0537eb3cf118/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6074bb495d54b0b9e69d40fd894bb0e95743f67c8ceb8f12c38d0537eb3cf118/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:14 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e.
Oct  9 09:49:14 compute-0 podman[168942]: 2025-10-09 09:49:14.347744493 +0000 UTC m=+0.090270221 container init 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  9 09:49:14 compute-0 multipathd[168954]: + sudo -E kolla_set_configs
Oct  9 09:49:14 compute-0 podman[168942]: multipathd
Oct  9 09:49:14 compute-0 podman[168942]: 2025-10-09 09:49:14.367951891 +0000 UTC m=+0.110477598 container start 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, tcib_managed=true)
Oct  9 09:49:14 compute-0 systemd[1]: Started multipathd container.
Oct  9 09:49:14 compute-0 multipathd[168954]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  9 09:49:14 compute-0 multipathd[168954]: INFO:__main__:Validating config file
Oct  9 09:49:14 compute-0 multipathd[168954]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  9 09:49:14 compute-0 multipathd[168954]: INFO:__main__:Writing out command to execute
Oct  9 09:49:14 compute-0 multipathd[168954]: ++ cat /run_command
Oct  9 09:49:14 compute-0 multipathd[168954]: + CMD='/usr/sbin/multipathd -d'
Oct  9 09:49:14 compute-0 multipathd[168954]: + ARGS=
Oct  9 09:49:14 compute-0 multipathd[168954]: + sudo kolla_copy_cacerts
Oct  9 09:49:14 compute-0 podman[168961]: 2025-10-09 09:49:14.436852021 +0000 UTC m=+0.057514194 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible)
Oct  9 09:49:14 compute-0 systemd[1]: 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e-6f1bfe63ca04e12.service: Main process exited, code=exited, status=1/FAILURE
Oct  9 09:49:14 compute-0 systemd[1]: 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e-6f1bfe63ca04e12.service: Failed with result 'exit-code'.
Oct  9 09:49:14 compute-0 multipathd[168954]: + [[ ! -n '' ]]
Oct  9 09:49:14 compute-0 multipathd[168954]: + . kolla_extend_start
Oct  9 09:49:14 compute-0 multipathd[168954]: Running command: '/usr/sbin/multipathd -d'
Oct  9 09:49:14 compute-0 multipathd[168954]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  9 09:49:14 compute-0 multipathd[168954]: + umask 0022
Oct  9 09:49:14 compute-0 multipathd[168954]: + exec /usr/sbin/multipathd -d
Oct  9 09:49:14 compute-0 multipathd[168954]: 1039.040879 | --------start up--------
Oct  9 09:49:14 compute-0 multipathd[168954]: 1039.040900 | read /etc/multipath.conf
Oct  9 09:49:14 compute-0 multipathd[168954]: 1039.046871 | path checkers start up
Oct  9 09:49:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:14.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:14 compute-0 python3.9[169144]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
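This closes the conditional-restart pattern: the play stats /etc/multipath/.multipath_restart_required (09:49:12), lists containers mounting /etc/multipath.conf (09:49:13), restarts edpm_multipathd (09:49:13-14), and finally removes the marker file above. The same pattern as a sketch:

    # Sketch of the restart-if-flagged pattern: act only when the marker exists,
    # confirm a container actually mounts /etc/multipath.conf, then clear the flag.
    import os
    import subprocess

    flag = "/etc/multipath/.multipath_restart_required"
    if os.path.exists(flag):
        names = subprocess.run(
            ["podman", "ps", "--filter", "volume=/etc/multipath.conf",
             "--format", "{{.Names}}"],
            capture_output=True, text=True).stdout.split()
        if names:                                                   # e.g. ['multipathd']
            subprocess.run(["systemctl", "restart", "edpm_multipathd.service"], check=True)
        os.remove(flag)                                             # ansible-file state=absent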
Oct  9 09:49:15 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v469: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:49:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:49:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:15.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:49:15 compute-0 python3.9[169296]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  9 09:49:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:16 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:16 compute-0 python3.9[169449]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct  9 09:49:16 compute-0 kernel: Key type psk registered
Oct  9 09:49:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:49:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:16.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:49:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:17.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:17.018Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:17.019Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:17.019Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
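All three ceph-dashboard webhook receivers fail identically: the resolver at 192.168.122.80:53 has no records for np0005478302/3/4.shiftstack, so every POST to :8443/api/prometheus_receiver is retried and eventually canceled. A quick resolution check for those targets (hostnames taken from the errors above):

    # Resolution check for the three webhook targets; the hostnames and failing
    # resolver behaviour come straight from the Alertmanager errors above.
    import socket

    for host in ("np0005478302.shiftstack",
                 "np0005478303.shiftstack",
                 "np0005478304.shiftstack"):
        try:
            print(host, "->", socket.gethostbyname(host))
        except socket.gaierror as exc:
            print(host, "-> unresolvable:", exc)    # 'no such host' in the log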
Oct  9 09:49:17 compute-0 python3.9[169613]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:49:17 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v470: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:49:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:17.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:17 compute-0 python3.9[169736]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003356.616368-2714-10858378009554/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:18 compute-0 python3.9[169888]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:18.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:18 compute-0 python3.9[170041]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:49:18 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  9 09:49:18 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct  9 09:49:18 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct  9 09:49:18 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct  9 09:49:18 compute-0 systemd[1]: Finished Load Kernel Modules.
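The nvme-fabrics handling spans 09:49:16-09:49:18: modprobe loads the module immediately (registering the psk key type), a modules-load.d drop-in plus an /etc/modules line make the load persistent, and systemd-modules-load is restarted to confirm the configuration parses. A condensed sketch of those persistence steps:

    # Condensed persistence steps for the module, assuming direct commands in
    # place of the modprobe/copy/lineinfile/systemd tasks in the play.
    import subprocess
    from pathlib import Path

    mod = "nvme-fabrics"
    subprocess.run(["modprobe", mod], check=True)                    # load now
    Path(f"/etc/modules-load.d/{mod}.conf").write_text(mod + "\n")   # load at boot
    subprocess.run(["systemctl", "restart", "systemd-modules-load.service"], check=True)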
Oct  9 09:49:18 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct  9 09:49:19 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v471: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:49:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:19.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:19 compute-0 python3.9[170199]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:49:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:49:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
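The audit entry above records the mgr issuing "osd blocklist ls" as a JSON mon_command; the equivalent from the CLI would be roughly:

    # CLI equivalent of the mon_command in the audit entry above:
    import subprocess
    out = subprocess.run(["ceph", "osd", "blocklist", "ls", "--format", "json"],
                         capture_output=True, text=True).stdout
    print(out)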
Oct  9 09:49:19 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct  9 09:49:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:49:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:49:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:49:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:49:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:49:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:49:20 compute-0 python3.9[170285]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:49:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:49:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:20.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:49:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:20 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:20 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:20 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:21 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:21 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v472: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:49:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:21.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:49:22] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Oct  9 09:49:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:49:22] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Oct  9 09:49:22 compute-0 podman[170290]: 2025-10-09 09:49:22.620006185 +0000 UTC m=+0.058533096 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:49:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:49:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:22.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:49:23 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v473: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:49:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:23.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:24.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:25 compute-0 systemd[1]: Reloading.
Oct  9 09:49:25 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v474: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:49:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:25.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:25 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:49:25 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:49:25 compute-0 systemd[1]: Reloading.
Oct  9 09:49:25 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:49:25 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:49:25 compute-0 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  9 09:49:25 compute-0 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  9 09:49:25 compute-0 lvm[170420]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:49:25 compute-0 lvm[170420]: VG ceph_vg0 finished
Oct  9 09:49:25 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 09:49:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:25 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:25 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:25 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:26 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:26 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct  9 09:49:26 compute-0 systemd[1]: Reloading.
Oct  9 09:49:26 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:49:26 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:49:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:26 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 09:49:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:26.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:26 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 09:49:26 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct  9 09:49:26 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.019s CPU time.
Oct  9 09:49:26 compute-0 systemd[1]: run-r9959b08219114035982ec92300493274.service: Deactivated successfully.
Oct  9 09:49:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:27.009Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:27.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:27.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:27.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:27 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  9 09:49:27 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Oct  9 09:49:27 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v475: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:49:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:27.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:28 compute-0 python3.9[171767]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:49:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:28.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:49:28 compute-0 python3.9[171918]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:49:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:28.853Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:28.853Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:28.854Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:29 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v476: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:49:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:29.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:29 compute-0 python3.9[172074]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:30.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:30 compute-0 python3.9[172253]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:49:30 compute-0 systemd[1]: Reloading.
Oct  9 09:49:30 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:49:30 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:49:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:30 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:30 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:30 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:31 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:31 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v477: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:49:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:31.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:31 compute-0 python3.9[172438]: ansible-ansible.builtin.service_facts Invoked
Oct  9 09:49:31 compute-0 network[172455]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 09:49:31 compute-0 network[172456]: 'network-scripts' will be removed from distribution in near future.
Oct  9 09:49:31 compute-0 network[172457]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 09:49:31 compute-0 podman[172462]: 2025-10-09 09:49:31.723538148 +0000 UTC m=+0.041337106 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  9 09:49:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:49:32] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Oct  9 09:49:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:49:32] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Oct  9 09:49:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:32.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:33 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v478: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:49:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:33.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:49:33 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:49:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:49:33 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:49:33 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v479: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:49:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:49:33 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:49:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:49:33 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:49:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:49:33 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:49:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:49:33 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:49:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:49:33 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:49:33 compute-0 podman[172725]: 2025-10-09 09:49:33.995868566 +0000 UTC m=+0.026949392 container create 5cfd6efd47b63f47397feffca65c667497f87ad0a3ee5cd30f92e11c64b041b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:49:34 compute-0 systemd[1]: Started libpod-conmon-5cfd6efd47b63f47397feffca65c667497f87ad0a3ee5cd30f92e11c64b041b6.scope.
Oct  9 09:49:34 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:49:34 compute-0 podman[172725]: 2025-10-09 09:49:34.058352182 +0000 UTC m=+0.089433028 container init 5cfd6efd47b63f47397feffca65c667497f87ad0a3ee5cd30f92e11c64b041b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_ellis, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  9 09:49:34 compute-0 podman[172725]: 2025-10-09 09:49:34.062927368 +0000 UTC m=+0.094008194 container start 5cfd6efd47b63f47397feffca65c667497f87ad0a3ee5cd30f92e11c64b041b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_ellis, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:49:34 compute-0 podman[172725]: 2025-10-09 09:49:34.064024754 +0000 UTC m=+0.095105581 container attach 5cfd6efd47b63f47397feffca65c667497f87ad0a3ee5cd30f92e11c64b041b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_ellis, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:49:34 compute-0 loving_ellis[172740]: 167 167
Oct  9 09:49:34 compute-0 systemd[1]: libpod-5cfd6efd47b63f47397feffca65c667497f87ad0a3ee5cd30f92e11c64b041b6.scope: Deactivated successfully.
Oct  9 09:49:34 compute-0 conmon[172740]: conmon 5cfd6efd47b63f47397f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5cfd6efd47b63f47397feffca65c667497f87ad0a3ee5cd30f92e11c64b041b6.scope/container/memory.events
Oct  9 09:49:34 compute-0 podman[172725]: 2025-10-09 09:49:34.067411453 +0000 UTC m=+0.098492279 container died 5cfd6efd47b63f47397feffca65c667497f87ad0a3ee5cd30f92e11c64b041b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_ellis, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  9 09:49:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b84da181e94f2fc6968deb015fd087fa8a10bc95559ca8216ef18bf9e14e353-merged.mount: Deactivated successfully.
Oct  9 09:49:34 compute-0 podman[172725]: 2025-10-09 09:49:33.984665137 +0000 UTC m=+0.015745983 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:49:34 compute-0 podman[172725]: 2025-10-09 09:49:34.087903482 +0000 UTC m=+0.118984308 container remove 5cfd6efd47b63f47397feffca65c667497f87ad0a3ee5cd30f92e11c64b041b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_ellis, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Oct  9 09:49:34 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:49:34 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:49:34 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:49:34 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:49:34 compute-0 systemd[1]: libpod-conmon-5cfd6efd47b63f47397feffca65c667497f87ad0a3ee5cd30f92e11c64b041b6.scope: Deactivated successfully.
Oct  9 09:49:34 compute-0 podman[172762]: 2025-10-09 09:49:34.211750777 +0000 UTC m=+0.029112886 container create d36d2e50795ce6e0e3339c1f133f29d4878ff4e6147197b367d83a15004f8ec0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  9 09:49:34 compute-0 systemd[1]: Started libpod-conmon-d36d2e50795ce6e0e3339c1f133f29d4878ff4e6147197b367d83a15004f8ec0.scope.
Oct  9 09:49:34 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34c87e31dedbb363855b0443b52b88fdc67b4e44282d0978b41acd62d34f70f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34c87e31dedbb363855b0443b52b88fdc67b4e44282d0978b41acd62d34f70f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34c87e31dedbb363855b0443b52b88fdc67b4e44282d0978b41acd62d34f70f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34c87e31dedbb363855b0443b52b88fdc67b4e44282d0978b41acd62d34f70f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34c87e31dedbb363855b0443b52b88fdc67b4e44282d0978b41acd62d34f70f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:34 compute-0 podman[172762]: 2025-10-09 09:49:34.264083147 +0000 UTC m=+0.081445266 container init d36d2e50795ce6e0e3339c1f133f29d4878ff4e6147197b367d83a15004f8ec0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  9 09:49:34 compute-0 podman[172762]: 2025-10-09 09:49:34.270032451 +0000 UTC m=+0.087394560 container start d36d2e50795ce6e0e3339c1f133f29d4878ff4e6147197b367d83a15004f8ec0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hamilton, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 09:49:34 compute-0 podman[172762]: 2025-10-09 09:49:34.274177045 +0000 UTC m=+0.091539174 container attach d36d2e50795ce6e0e3339c1f133f29d4878ff4e6147197b367d83a15004f8ec0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  9 09:49:34 compute-0 podman[172762]: 2025-10-09 09:49:34.200892459 +0000 UTC m=+0.018254578 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:49:34 compute-0 frosty_hamilton[172775]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:49:34 compute-0 frosty_hamilton[172775]: --> All data devices are unavailable
Oct  9 09:49:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:49:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:49:34 compute-0 systemd[1]: libpod-d36d2e50795ce6e0e3339c1f133f29d4878ff4e6147197b367d83a15004f8ec0.scope: Deactivated successfully.
Oct  9 09:49:34 compute-0 podman[172762]: 2025-10-09 09:49:34.561649443 +0000 UTC m=+0.379011552 container died d36d2e50795ce6e0e3339c1f133f29d4878ff4e6147197b367d83a15004f8ec0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hamilton, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:49:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a34c87e31dedbb363855b0443b52b88fdc67b4e44282d0978b41acd62d34f70f-merged.mount: Deactivated successfully.
Oct  9 09:49:34 compute-0 podman[172762]: 2025-10-09 09:49:34.599544576 +0000 UTC m=+0.416906685 container remove d36d2e50795ce6e0e3339c1f133f29d4878ff4e6147197b367d83a15004f8ec0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:49:34 compute-0 systemd[1]: libpod-conmon-d36d2e50795ce6e0e3339c1f133f29d4878ff4e6147197b367d83a15004f8ec0.scope: Deactivated successfully.
Oct  9 09:49:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:34.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:35 compute-0 podman[172944]: 2025-10-09 09:49:35.021674324 +0000 UTC m=+0.027961487 container create ace16f3f7ba89fa4537ac5a9d7c474f0cbfc53d4179d538aba051cc9a5ae10d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:49:35 compute-0 systemd[1]: Started libpod-conmon-ace16f3f7ba89fa4537ac5a9d7c474f0cbfc53d4179d538aba051cc9a5ae10d5.scope.
Oct  9 09:49:35 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:49:35 compute-0 podman[172944]: 2025-10-09 09:49:35.075253944 +0000 UTC m=+0.081541127 container init ace16f3f7ba89fa4537ac5a9d7c474f0cbfc53d4179d538aba051cc9a5ae10d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  9 09:49:35 compute-0 podman[172944]: 2025-10-09 09:49:35.080075043 +0000 UTC m=+0.086362205 container start ace16f3f7ba89fa4537ac5a9d7c474f0cbfc53d4179d538aba051cc9a5ae10d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct  9 09:49:35 compute-0 podman[172944]: 2025-10-09 09:49:35.081113919 +0000 UTC m=+0.087401102 container attach ace16f3f7ba89fa4537ac5a9d7c474f0cbfc53d4179d538aba051cc9a5ae10d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_booth, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:49:35 compute-0 practical_booth[172958]: 167 167
Oct  9 09:49:35 compute-0 systemd[1]: libpod-ace16f3f7ba89fa4537ac5a9d7c474f0cbfc53d4179d538aba051cc9a5ae10d5.scope: Deactivated successfully.
Oct  9 09:49:35 compute-0 conmon[172958]: conmon ace16f3f7ba89fa4537a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ace16f3f7ba89fa4537ac5a9d7c474f0cbfc53d4179d538aba051cc9a5ae10d5.scope/container/memory.events
Oct  9 09:49:35 compute-0 podman[172944]: 2025-10-09 09:49:35.084434552 +0000 UTC m=+0.090721935 container died ace16f3f7ba89fa4537ac5a9d7c474f0cbfc53d4179d538aba051cc9a5ae10d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_booth, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 09:49:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b38b5f56aee5165557c23fa5db732336698d18d8ab0aa6011bd80f7f45d8dc3-merged.mount: Deactivated successfully.
Oct  9 09:49:35 compute-0 podman[172944]: 2025-10-09 09:49:35.102979475 +0000 UTC m=+0.109266639 container remove ace16f3f7ba89fa4537ac5a9d7c474f0cbfc53d4179d538aba051cc9a5ae10d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  9 09:49:35 compute-0 podman[172944]: 2025-10-09 09:49:35.010694157 +0000 UTC m=+0.016981320 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:49:35 compute-0 systemd[1]: libpod-conmon-ace16f3f7ba89fa4537ac5a9d7c474f0cbfc53d4179d538aba051cc9a5ae10d5.scope: Deactivated successfully.
Oct  9 09:49:35 compute-0 podman[173032]: 2025-10-09 09:49:35.224940267 +0000 UTC m=+0.028293383 container create c5e7461490eea7cfb6da9d3702aaa0115be8fee03b7dc13dd0fef72cc8fb7f47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_blackwell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  9 09:49:35 compute-0 systemd[1]: Started libpod-conmon-c5e7461490eea7cfb6da9d3702aaa0115be8fee03b7dc13dd0fef72cc8fb7f47.scope.
Oct  9 09:49:35 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad8084b4b352473300bcb23c7edbbed02eb1df2cf7ef2eb758ed85c13a22d96f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad8084b4b352473300bcb23c7edbbed02eb1df2cf7ef2eb758ed85c13a22d96f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad8084b4b352473300bcb23c7edbbed02eb1df2cf7ef2eb758ed85c13a22d96f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad8084b4b352473300bcb23c7edbbed02eb1df2cf7ef2eb758ed85c13a22d96f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:35 compute-0 podman[173032]: 2025-10-09 09:49:35.278630134 +0000 UTC m=+0.081983249 container init c5e7461490eea7cfb6da9d3702aaa0115be8fee03b7dc13dd0fef72cc8fb7f47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:49:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:35.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:35 compute-0 podman[173032]: 2025-10-09 09:49:35.285760331 +0000 UTC m=+0.089113436 container start c5e7461490eea7cfb6da9d3702aaa0115be8fee03b7dc13dd0fef72cc8fb7f47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  9 09:49:35 compute-0 podman[173032]: 2025-10-09 09:49:35.286993574 +0000 UTC m=+0.090346679 container attach c5e7461490eea7cfb6da9d3702aaa0115be8fee03b7dc13dd0fef72cc8fb7f47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  9 09:49:35 compute-0 podman[173032]: 2025-10-09 09:49:35.213570486 +0000 UTC m=+0.016923611 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]: {
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:    "1": [
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:        {
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:            "devices": [
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:                "/dev/loop3"
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:            ],
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:            "lv_name": "ceph_lv0",
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:            "lv_size": "21470642176",
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:            "name": "ceph_lv0",
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:            "tags": {
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:                "ceph.cluster_name": "ceph",
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:                "ceph.crush_device_class": "",
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:                "ceph.encrypted": "0",
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:                "ceph.osd_id": "1",
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:                "ceph.type": "block",
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:                "ceph.vdo": "0",
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:                "ceph.with_tpm": "0"
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:            },
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:            "type": "block",
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:            "vg_name": "ceph_vg0"
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:        }
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]:    ]
Oct  9 09:49:35 compute-0 xenodochial_blackwell[173069]: }
Oct  9 09:49:35 compute-0 systemd[1]: libpod-c5e7461490eea7cfb6da9d3702aaa0115be8fee03b7dc13dd0fef72cc8fb7f47.scope: Deactivated successfully.
Oct  9 09:49:35 compute-0 podman[173131]: 2025-10-09 09:49:35.565254384 +0000 UTC m=+0.023028798 container died c5e7461490eea7cfb6da9d3702aaa0115be8fee03b7dc13dd0fef72cc8fb7f47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  9 09:49:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad8084b4b352473300bcb23c7edbbed02eb1df2cf7ef2eb758ed85c13a22d96f-merged.mount: Deactivated successfully.
Oct  9 09:49:35 compute-0 podman[173131]: 2025-10-09 09:49:35.591965115 +0000 UTC m=+0.049739499 container remove c5e7461490eea7cfb6da9d3702aaa0115be8fee03b7dc13dd0fef72cc8fb7f47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_blackwell, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 09:49:35 compute-0 systemd[1]: libpod-conmon-c5e7461490eea7cfb6da9d3702aaa0115be8fee03b7dc13dd0fef72cc8fb7f47.scope: Deactivated successfully.
Oct  9 09:49:35 compute-0 python3.9[173126]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:35 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v480: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 09:49:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:36 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:36 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:36 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:36 compute-0 podman[173376]: 2025-10-09 09:49:36.071786707 +0000 UTC m=+0.034433966 container create 677af2d45ca6918ef3ec1097aa5a8e6f12bce0805b4f2b6353d0b3f07c32433d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:49:36 compute-0 systemd[1]: Started libpod-conmon-677af2d45ca6918ef3ec1097aa5a8e6f12bce0805b4f2b6353d0b3f07c32433d.scope.
Oct  9 09:49:36 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:49:36 compute-0 podman[173376]: 2025-10-09 09:49:36.137886333 +0000 UTC m=+0.100533603 container init 677af2d45ca6918ef3ec1097aa5a8e6f12bce0805b4f2b6353d0b3f07c32433d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_elion, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:49:36 compute-0 podman[173376]: 2025-10-09 09:49:36.143377905 +0000 UTC m=+0.106025164 container start 677af2d45ca6918ef3ec1097aa5a8e6f12bce0805b4f2b6353d0b3f07c32433d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 09:49:36 compute-0 podman[173376]: 2025-10-09 09:49:36.144819669 +0000 UTC m=+0.107466928 container attach 677af2d45ca6918ef3ec1097aa5a8e6f12bce0805b4f2b6353d0b3f07c32433d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_elion, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  9 09:49:36 compute-0 inspiring_elion[173389]: 167 167
Oct  9 09:49:36 compute-0 systemd[1]: libpod-677af2d45ca6918ef3ec1097aa5a8e6f12bce0805b4f2b6353d0b3f07c32433d.scope: Deactivated successfully.
Oct  9 09:49:36 compute-0 podman[173376]: 2025-10-09 09:49:36.147627297 +0000 UTC m=+0.110274556 container died 677af2d45ca6918ef3ec1097aa5a8e6f12bce0805b4f2b6353d0b3f07c32433d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  9 09:49:36 compute-0 podman[173376]: 2025-10-09 09:49:36.058921159 +0000 UTC m=+0.021568438 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:49:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ba087069ae050f1a6a1834af826048e47e7889052f6e1cabc81ca694f4bf81b-merged.mount: Deactivated successfully.
Oct  9 09:49:36 compute-0 podman[173376]: 2025-10-09 09:49:36.174246084 +0000 UTC m=+0.136893343 container remove 677af2d45ca6918ef3ec1097aa5a8e6f12bce0805b4f2b6353d0b3f07c32433d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  9 09:49:36 compute-0 systemd[1]: libpod-conmon-677af2d45ca6918ef3ec1097aa5a8e6f12bce0805b4f2b6353d0b3f07c32433d.scope: Deactivated successfully.
Oct  9 09:49:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:36 compute-0 python3.9[173374]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:36 compute-0 podman[173411]: 2025-10-09 09:49:36.301027995 +0000 UTC m=+0.031788664 container create 61d6e314c494d2492122ff3176aadfe0bc335d0e4c61562963060b88a354b3e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:49:36 compute-0 systemd[1]: Started libpod-conmon-61d6e314c494d2492122ff3176aadfe0bc335d0e4c61562963060b88a354b3e0.scope.
Oct  9 09:49:36 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e113ca94c97dd07850a9793c95ca5ac00cb14d1e11b15eba4f61f406e4c229/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e113ca94c97dd07850a9793c95ca5ac00cb14d1e11b15eba4f61f406e4c229/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e113ca94c97dd07850a9793c95ca5ac00cb14d1e11b15eba4f61f406e4c229/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e113ca94c97dd07850a9793c95ca5ac00cb14d1e11b15eba4f61f406e4c229/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:36 compute-0 podman[173411]: 2025-10-09 09:49:36.362212786 +0000 UTC m=+0.092973466 container init 61d6e314c494d2492122ff3176aadfe0bc335d0e4c61562963060b88a354b3e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  9 09:49:36 compute-0 podman[173411]: 2025-10-09 09:49:36.368669014 +0000 UTC m=+0.099429673 container start 61d6e314c494d2492122ff3176aadfe0bc335d0e4c61562963060b88a354b3e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct  9 09:49:36 compute-0 podman[173411]: 2025-10-09 09:49:36.36982391 +0000 UTC m=+0.100584569 container attach 61d6e314c494d2492122ff3176aadfe0bc335d0e4c61562963060b88a354b3e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct  9 09:49:36 compute-0 podman[173411]: 2025-10-09 09:49:36.287675971 +0000 UTC m=+0.018436650 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:49:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:36.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:36 compute-0 lvm[173653]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:49:36 compute-0 lvm[173653]: VG ceph_vg0 finished
Oct  9 09:49:36 compute-0 stupefied_gates[173425]: {}
Oct  9 09:49:36 compute-0 systemd[1]: libpod-61d6e314c494d2492122ff3176aadfe0bc335d0e4c61562963060b88a354b3e0.scope: Deactivated successfully.
Oct  9 09:49:36 compute-0 podman[173411]: 2025-10-09 09:49:36.952735907 +0000 UTC m=+0.683496566 container died 61d6e314c494d2492122ff3176aadfe0bc335d0e4c61562963060b88a354b3e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_gates, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 09:49:36 compute-0 python3.9[173613]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-46e113ca94c97dd07850a9793c95ca5ac00cb14d1e11b15eba4f61f406e4c229-merged.mount: Deactivated successfully.
Oct  9 09:49:36 compute-0 podman[173411]: 2025-10-09 09:49:36.979793732 +0000 UTC m=+0.710554391 container remove 61d6e314c494d2492122ff3176aadfe0bc335d0e4c61562963060b88a354b3e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:49:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:37.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:49:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:37.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:37.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:37.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:37 compute-0 systemd[1]: libpod-conmon-61d6e314c494d2492122ff3176aadfe0bc335d0e4c61562963060b88a354b3e0.scope: Deactivated successfully.
Oct  9 09:49:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:49:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:49:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:49:37 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:49:37 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:49:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:49:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:37.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:49:37 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v481: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:49:37 compute-0 python3.9[173843]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:38 compute-0 python3.9[173996]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:38.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:38.845Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:38.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:38.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:38.861Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:38 compute-0 python3.9[174151]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:49:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:39.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:49:39 compute-0 python3.9[174304]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:39 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v482: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:49:40 compute-0 python3.9[174457]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:40.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:40 compute-0 python3.9[174612]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:40 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:41 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:41 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:41 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:49:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:41.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:49:41 compute-0 python3.9[174764]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:41 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v483: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:49:41 compute-0 python3.9[174916]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:49:42] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Oct  9 09:49:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:49:42] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Oct  9 09:49:42 compute-0 python3.9[175069]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:42.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:42 compute-0 podman[175194]: 2025-10-09 09:49:42.730630993 +0000 UTC m=+0.039193560 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  9 09:49:42 compute-0 python3.9[175237]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:43.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:43 compute-0 python3.9[175391]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:43 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v484: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:49:43 compute-0 python3.9[175543]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:44 compute-0 python3.9[175696]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:44 compute-0 podman[175821]: 2025-10-09 09:49:44.548837752 +0000 UTC m=+0.037735744 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  9 09:49:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:44.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:44 compute-0 python3.9[175866]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:45 compute-0 python3.9[176018]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:45.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:45 compute-0 python3.9[176170]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:45 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v485: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:49:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:45 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:45 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:45 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:45 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:46 compute-0 python3.9[176322]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:46 compute-0 python3.9[176475]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:46.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:46 compute-0 python3.9[176628]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:47.010Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:47.018Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:47.018Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:47.018Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:47.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:47 compute-0 python3.9[176780]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:47 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v486: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:49:47 compute-0 python3.9[176932]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:48 compute-0 python3.9[177085]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:49:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:48.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:49:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:48.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:48.852Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:48.852Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:48.853Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:49 compute-0 python3.9[177238]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  9 09:49:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:49.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:49:49
Oct  9 09:49:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:49:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 09:49:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'images', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'backups', '.nfs', 'default.rgw.log', '.mgr', 'default.rgw.control', 'vms']
Oct  9 09:49:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 09:49:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:49:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:49:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:49:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:49:49 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v487: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:49:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:49:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:49:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:49:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:49:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:49:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:49:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:49:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:49:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:49:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:49:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:49:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:49:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:49:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:49:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:50 compute-0 python3.9[177390]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:49:50 compute-0 systemd[1]: Reloading.
Oct  9 09:49:50 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:49:50 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:49:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:50.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:50 compute-0 python3.9[177604]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:49:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:51.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:49:51 compute-0 python3.9[177757]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:51 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v488: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 09:49:51 compute-0 python3.9[177910]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:52 compute-0 python3.9[178064]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:49:52] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Oct  9 09:49:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:49:52] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Oct  9 09:49:52 compute-0 python3.9[178217]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:52.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:52 compute-0 podman[178220]: 2025-10-09 09:49:52.729265948 +0000 UTC m=+0.065551014 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  9 09:49:53 compute-0 python3.9[178394]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:53.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:53 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v489: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:49:53 compute-0 python3.9[178547]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:54 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:54 compute-0 python3.9[178701]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:54.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:55.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:55 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v490: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 09:49:55 compute-0 python3.9[178855]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:56 compute-0 python3.9[179008]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:56.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:56 compute-0 python3.9[179161]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:57.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:57.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:57.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:57.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
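[note] All three ceph-dashboard webhook receivers fail identically: the Alertmanager dispatcher cannot resolve np0005478302/3/4.shiftstack against the resolver at 192.168.122.80:53, so each notification is retried (up to 8 attempts), dropped, and the cycle restarts. A quick resolution check under the same system resolver (hostnames copied from the log; the script itself is illustrative):

    import socket

    HOSTS = [
        "np0005478302.shiftstack",
        "np0005478303.shiftstack",
        "np0005478304.shiftstack",
    ]

    for host in HOSTS:
        try:
            addrs = {ai[4][0] for ai in socket.getaddrinfo(host, 8443)}
            print(host, "->", sorted(addrs))
        except socket.gaierror as exc:
            # Reproduces the "no such host" error Alertmanager reports.
            print(host, "-> unresolved:", exc)

Until those records resolve (or the receivers point at resolvable names), these warn/error lines will repeat on every dispatch interval, as they do throughout this section.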
Oct  9 09:49:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:57.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:57 compute-0 python3.9[179313]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:57 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v491: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:49:57 compute-0 python3.9[179465]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:58 compute-0 python3.9[179618]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:49:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:58.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:49:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:58.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:58.856Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:58.857Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:49:58.857Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:49:58 compute-0 python3.9[179771]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:59 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:59 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:59 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:59 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:49:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
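[note] This four-line block is one pass of ganesha's clustered recovery cycle: the server (re)enters a 90-second grace period, reloads client recovery state from its RADOS backend, finds no clients with reclaim pending (clid count(0)), then checks whether the cluster as a whole is still enforcing grace. By the usual -errno convention, ret=-45 is errno 45; the log does not say what the rados_cluster backend means by it, but the numeric mapping is easy to confirm on Linux:

    import errno, os

    # ret=-45 in the ganesha log, read as -errno (Linux-specific mapping).
    print(errno.errorcode[45], "-", os.strerror(45))
    # EL2NSYNC - Level 2 not synchronized

The cycle repeats every few seconds below because grace can only be lifted cluster-wide, and the nfs daemon on compute-1 is in an error state (see the HEALTH_WARN lines at 09:50:00).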
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
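[note] The pg_autoscaler numbers are internally consistent: each pool's "pg target" is its usage fraction times its bias times the root PG budget, and the budget works out to 300 here (consistent with 3 OSDs at the default mon_target_pg_per_osd of 100; inferred from the arithmetic, not stated in the log). The fractional target is then quantized to a power of two with minimums applied, which is why every pool stays at its current 16 or 32 despite near-zero targets. A worked check against the logged values:

    # Usage fractions and biases copied from the pg_autoscaler lines above.
    ROOT_PG_TARGET = 300  # inferred: 3 OSDs * mon_target_pg_per_osd (100)

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".nfs":               (6.359070782053786e-08, 1.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
    }

    for name, (usage, bias) in pools.items():
        # Matches the "pg target" printed for each pool in the log.
        print(name, usage * bias * ROOT_PG_TARGET)

The 64411926528 in every effective_target_ratio line is the root capacity in bytes (about 60 GiB), matching the pgmap's "60 GiB / 60 GiB avail".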
Oct  9 09:49:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:49:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:59.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:59 compute-0 python3.9[179923]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:59 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v492: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:49:59 compute-0 python3.9[180075]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:00 compute-0 ceph-mon[4497]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Oct  9 09:50:00 compute-0 ceph-mon[4497]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Oct  9 09:50:00 compute-0 ceph-mon[4497]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.0.0.compute-1.douegr on compute-1 is in error state
Oct  9 09:50:00 compute-0 systemd[1]: Starting system activity accounting tool...
Oct  9 09:50:00 compute-0 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct  9 09:50:00 compute-0 systemd[1]: Finished system activity accounting tool.
Oct  9 09:50:00 compute-0 python3.9[180229]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:00 compute-0 ceph-mon[4497]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Oct  9 09:50:00 compute-0 ceph-mon[4497]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Oct  9 09:50:00 compute-0 ceph-mon[4497]:    daemon nfs.cephfs.0.0.compute-1.douegr on compute-1 is in error state
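[note] The same health warning appears twice because the monitor both emits it through log_channel(cluster) and mirrors the cluster log into journald. The root cause named here, daemon nfs.cephfs.0.0.compute-1.douegr in error state on compute-1, is also why the local ganesha instance above never manages to lift its grace period. The equivalent on-demand query via the standard ceph CLI:

    import json, subprocess

    out = subprocess.run(
        ["ceph", "health", "detail", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for name, check in json.loads(out).get("checks", {}).items():
        print(name, "-", check["summary"]["message"])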
Oct  9 09:50:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:00.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:00 compute-0 python3.9[180382]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:01.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:01 compute-0 python3.9[180534]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
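[note] Each "ansible-ansible.builtin.file Invoked with ..." line is the file module logging its parameters on the managed node: this run creates the EDPM configuration and state directories (/var/lib/openstack/config/{nova,containers,nova_nvme_cleaner}, /var/lib/nova, /etc/multipath, /etc/iscsi, /var/lib/iscsi, /etc/nvme, /run/openvswitch, ...) owned by zuul with SELinux type container_file_t so containers can mount them. A rough sketch of what one such task does on the host (illustrative, not the module's actual implementation):

    import grp, os, pwd, subprocess

    def ensure_dir(path, owner="zuul", group="zuul", mode=0o755,
                   setype="container_file_t"):
        """Approximates ansible.builtin.file with state=directory."""
        os.makedirs(path, exist_ok=True)
        os.chown(path, pwd.getpwnam(owner).pw_uid, grp.getgrnam(group).gr_gid)
        os.chmod(path, mode)
        # The module's setype= parameter amounts to an SELinux relabel.
        subprocess.run(["chcon", "-t", setype, path], check=True)

    ensure_dir("/var/lib/openstack/config/nova")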
Oct  9 09:50:01 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v493: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 09:50:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:50:02] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Oct  9 09:50:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:50:02] "GET /metrics HTTP/1.1" 200 48418 "" "Prometheus/2.51.0"
Oct  9 09:50:02 compute-0 podman[180561]: 2025-10-09 09:50:02.596647893 +0000 UTC m=+0.039802326 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
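[note] The podman line is a health_status event: the iscsid container's configured healthcheck (the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks/iscsid) just ran, reported healthy, and the failing streak is 0; the long config_data blob is the container definition that edpm_ansible manages it from. The same check can be triggered by hand:

    import subprocess

    # Runs the container's defined healthcheck once, as the periodic timer does.
    subprocess.run(["podman", "healthcheck", "run", "iscsid"], check=True)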
Oct  9 09:50:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:02.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:50:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:03 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:50:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:03 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:50:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:03 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:50:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:03.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:03 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v494: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:50:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:50:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
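[note] These two mon lines record the mgr (mgr.compute-0.lwqgfy) periodically asking the leader monitor for the OSD blocklist; the request is dispatched as a mon_command and logged on the audit channel at DBG. The same query from the CLI:

    import subprocess

    # Equivalent of the mon_command {"prefix": "osd blocklist ls"} above.
    subprocess.run(["ceph", "osd", "blocklist", "ls"], check=True)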
Oct  9 09:50:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:04.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:05.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:05 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v495: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:50:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:06 compute-0 python3.9[180708]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct  9 09:50:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:06.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:07.013Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:07.025Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:07.025Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:07.025Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:07 compute-0 python3.9[180862]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  9 09:50:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:07.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:07 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v496: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:50:07 compute-0 python3.9[181020]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
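[note] The getent/group/user trio (getent passwd nova, then group nova with gid 42436, then user nova with uid 42436 and supplementary group libvirt) is the usual idempotent pattern for creating a service account; 42436 matches the uid/gid convention of the Kolla-built images used here, so files the nova user writes on the host keep consistent ownership inside the containers. When the account is absent, the modules perform roughly:

    import subprocess

    # Illustrative equivalents of the group/user tasks above.
    subprocess.run(["groupadd", "--gid", "42436", "nova"], check=True)
    subprocess.run(
        ["useradd", "--uid", "42436", "--gid", "nova",
         "--groups", "libvirt", "--shell", "/bin/sh",
         "--comment", "nova user", "nova"],
        check=True,
    )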
Oct  9 09:50:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:50:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:08 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:50:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:08 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:50:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:08 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:50:08 compute-0 systemd-logind[798]: New session 40 of user zuul.
Oct  9 09:50:08 compute-0 systemd[1]: Started Session 40 of User zuul.
Oct  9 09:50:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:08.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:08 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Oct  9 09:50:08 compute-0 systemd-logind[798]: Session 40 logged out. Waiting for processes to exit.
Oct  9 09:50:08 compute-0 systemd-logind[798]: Removed session 40.
Oct  9 09:50:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:08.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:08.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:08.861Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:08.861Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:09 compute-0 python3.9[181208]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:09.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:09 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v497: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:50:09 compute-0 python3.9[181329]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003408.9066176-4351-225733427026305/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
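[note] Every rendered file below follows the same stat-then-copy pattern: ansible.legacy.stat reads the destination's sha1, and ansible.legacy.copy rewrites the file only when that digest differs from the source's (logged as checksum=...), keeping the play idempotent. The comparison itself is just:

    import hashlib
    from pathlib import Path

    def sha1(path):
        # Same digest the stat/copy modules log as checksum=...
        return hashlib.sha1(Path(path).read_bytes()).hexdigest()

    dest = "/var/lib/openstack/config/nova/config.json"
    unchanged = sha1(dest) == "2c2474b5f24ef7c9ed37f49680082593e0d1100b"
    print("copy skipped:", unchanged)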
Oct  9 09:50:10 compute-0 python3.9[181479]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:50:10.100 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:50:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:50:10.101 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:50:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:50:10.101 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
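[note] The three ovn_metadata_agent DEBUG lines are oslo.concurrency's standard acquire/acquired/released logging around neutron's periodic child-process check; the lock is held for ~0.000s, i.e. the monitor found nothing to restart. The pattern corresponds to the synchronized decorator, sketched here with a stub body:

    from oslo_concurrency import lockutils

    # Emits the same "Acquiring lock" / "acquired" / "released" DEBUG lines.
    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        pass  # neutron walks its monitored external processes here

    _check_child_processes()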
Oct  9 09:50:10 compute-0 python3.9[181556]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:10.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:10 compute-0 python3.9[181732]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:11 compute-0 python3.9[181853]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003410.5194566-4351-33300216513241/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:11.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:11 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v498: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:50:11 compute-0 python3.9[182003]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:12 compute-0 python3.9[182124]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003411.3377407-4351-31986704099617/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:50:12] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Oct  9 09:50:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:50:12] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Oct  9 09:50:12 compute-0 python3.9[182275]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:12.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:12 compute-0 python3.9[182397]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003412.170589-4351-139412545266781/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:12 compute-0 podman[182398]: 2025-10-09 09:50:12.972269976 +0000 UTC m=+0.041711521 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 09:50:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:50:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:13 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:50:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:13 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:50:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:13 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:50:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:13.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:13 compute-0 python3.9[182565]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:50:13 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v499: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:50:14 compute-0 python3.9[182717]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:50:14 compute-0 python3.9[182870]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:50:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:50:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:14.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:50:15 compute-0 podman[182995]: 2025-10-09 09:50:15.003836035 +0000 UTC m=+0.041023465 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 09:50:15 compute-0 python3.9[183041]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:15.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:15 compute-0 python3.9[183164]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1760003414.7952318-4630-215933835657471/.source _original_basename=.a5fokk8p follow=False checksum=fc095ef07bcbff608045cfd551b648c11f58840b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
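[note] The compute_id deployment is deliberately strict: the file carrying the host's stable compute UUID is written with owner nova, mode 0400, and attributes=+i, the filesystem immutable bit, so it cannot be modified or deleted without first clearing the flag. attributes=+i corresponds to:

    import subprocess

    # The copy task's attributes=+i maps onto chattr; undo with chattr -i.
    subprocess.run(["chattr", "+i", "/var/lib/nova/compute_id"], check=True)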
Oct  9 09:50:15 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v500: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:50:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:16 compute-0 python3.9[183317]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:50:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:16.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:16 compute-0 python3.9[183470]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:17.013Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:17.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:17.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:17.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:17 compute-0 python3.9[183591]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003416.458163-4708-57457070094613/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=837ffd9c004e5987a2e117698c56827ebbfeb5b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:17.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:17 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v501: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:50:17 compute-0 python3.9[183741]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:17 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:50:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:18 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:50:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:18 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:50:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:18 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:50:18 compute-0 python3.9[183862]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003417.351955-4753-256476420233167/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=722ab36345f3375cbdcf911ce8f6e1a8083d7e59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:50:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:18.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:50:18 compute-0 python3.9[184016]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct  9 09:50:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:18.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:18.864Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:18.865Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:18.865Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:19.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:19 compute-0 python3.9[184168]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  9 09:50:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:50:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:50:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:50:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:50:19 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v502: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:50:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:50:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:50:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:50:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:50:20 compute-0 python3[184320]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct  9 09:50:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:20.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:21.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:21 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v503: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:50:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:50:22] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Oct  9 09:50:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:50:22] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Oct  9 09:50:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:22.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:50:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:50:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:50:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:23 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:50:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:23.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:23 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v504: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:50:23 compute-0 podman[184355]: 2025-10-09 09:50:23.651941989 +0000 UTC m=+0.096296922 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  9 09:50:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:24.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:25.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:25 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v505: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:50:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:26.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:27.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:27.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:27.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:27.024Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:27.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:27 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v506: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:50:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:50:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:50:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:50:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:50:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:28.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:28.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:28.859Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:28.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:28.860Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:29.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:29 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v507: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:50:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:30.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:50:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:31.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:50:31 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v508: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:50:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:31 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:50:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:31 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:50:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:31 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:50:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:32 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:50:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:50:32] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Oct  9 09:50:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:50:32] "GET /metrics HTTP/1.1" 200 48420 "" "Prometheus/2.51.0"
Oct  9 09:50:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:32.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:33 compute-0 podman[184331]: 2025-10-09 09:50:33.030851221 +0000 UTC m=+12.796091680 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct  9 09:50:33 compute-0 podman[184458]: 2025-10-09 09:50:33.122074118 +0000 UTC m=+0.027035765 container create 75820f8cfb9efc3f3a845f4daedc993d01d905fb305e8ecdf7bbe1ac010a3dca (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=nova_compute_init, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:50:33 compute-0 podman[184458]: 2025-10-09 09:50:33.109430286 +0000 UTC m=+0.014391952 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct  9 09:50:33 compute-0 python3[184320]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct  9 09:50:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:33.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:33 compute-0 podman[184608]: 2025-10-09 09:50:33.598295129 +0000 UTC m=+0.038894558 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  9 09:50:33 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v509: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:50:33 compute-0 python3.9[184653]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:50:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:50:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:50:34 compute-0 python3.9[184809]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct  9 09:50:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:34.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:35 compute-0 python3.9[184961]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  9 09:50:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:35.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:35 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v510: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:50:36 compute-0 python3[185113]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct  9 09:50:36 compute-0 podman[185143]: 2025-10-09 09:50:36.162617915 +0000 UTC m=+0.027419409 container create 11f8d9fb149efab552822aef2596b2e7646ddb3066789052699103de322e79d7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:50:36 compute-0 podman[185143]: 2025-10-09 09:50:36.149218686 +0000 UTC m=+0.014020191 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct  9 09:50:36 compute-0 python3[185113]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 kolla_start
Oct  9 09:50:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:36.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:36 compute-0 python3.9[185322]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:50:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:36 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:50:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:36 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:50:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:36 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:50:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:50:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:37.015Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:37.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:37.031Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:37.031Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:50:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:37.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:50:37 compute-0 python3.9[185487]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:50:37 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v511: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:50:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:50:37 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:50:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:50:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:50:37 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v512: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:50:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:50:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:50:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:50:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:50:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:50:37 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:50:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:50:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:50:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:50:37 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:50:37 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:50:37 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:50:37 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:50:37 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:50:37 compute-0 python3.9[185706]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760003437.469832-5029-83912396986805/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:50:37 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Oct  9 09:50:37 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:50:37.991002) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:50:37 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Oct  9 09:50:37 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003437991031, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4202, "num_deletes": 502, "total_data_size": 8536109, "memory_usage": 8714208, "flush_reason": "Manual Compaction"}
Oct  9 09:50:37 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438004220, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 8291926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13047, "largest_seqno": 17247, "table_properties": {"data_size": 8274219, "index_size": 11961, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36680, "raw_average_key_size": 19, "raw_value_size": 8237464, "raw_average_value_size": 4428, "num_data_blocks": 522, "num_entries": 1860, "num_filter_entries": 1860, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002994, "oldest_key_time": 1760002994, "file_creation_time": 1760003437, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 13245 microseconds, and 10160 cpu microseconds.
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:50:38.004247) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 8291926 bytes OK
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:50:38.004261) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:50:38.004571) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:50:38.004584) EVENT_LOG_v1 {"time_micros": 1760003438004580, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:50:38.004595) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 8519309, prev total WAL file size 8519309, number of live WAL files 2.
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:50:38.005671) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(8097KB)], [32(11MB)]
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438005704, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 19849816, "oldest_snapshot_seqno": -1}
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 4992 keys, 15243964 bytes, temperature: kUnknown
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438035889, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 15243964, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15205872, "index_size": 24542, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 124698, "raw_average_key_size": 24, "raw_value_size": 15110531, "raw_average_value_size": 3026, "num_data_blocks": 1034, "num_entries": 4992, "num_filter_entries": 4992, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002419, "oldest_key_time": 0, "file_creation_time": 1760003438, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:50:38.036151) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15243964 bytes
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:50:38.047645) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 654.6 rd, 502.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(7.9, 11.0 +0.0 blob) out(14.5 +0.0 blob), read-write-amplify(4.2) write-amplify(1.8) OK, records in: 6015, records dropped: 1023 output_compression: NoCompression
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:50:38.047659) EVENT_LOG_v1 {"time_micros": 1760003438047653, "job": 14, "event": "compaction_finished", "compaction_time_micros": 30322, "compaction_time_cpu_micros": 22660, "output_level": 6, "num_output_files": 1, "total_output_size": 15243964, "num_input_records": 6015, "num_output_records": 4992, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438050307, "job": 14, "event": "table_file_deletion", "file_number": 34}
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438051642, "job": 14, "event": "table_file_deletion", "file_number": 32}
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:50:38.005614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:50:38.051682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:50:38.051686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:50:38.051688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:50:38.051689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:50:38 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:50:38.051690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:50:38 compute-0 podman[185855]: 2025-10-09 09:50:38.216981696 +0000 UTC m=+0.029992120 container create 79e90bda9afa9b0b8464c66595a81b7c9ccf153edd8ae890f0128d5ffa026fd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_moser, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:50:38 compute-0 systemd[1]: Started libpod-conmon-79e90bda9afa9b0b8464c66595a81b7c9ccf153edd8ae890f0128d5ffa026fd6.scope.
Oct  9 09:50:38 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:50:38 compute-0 podman[185855]: 2025-10-09 09:50:38.284972634 +0000 UTC m=+0.097983058 container init 79e90bda9afa9b0b8464c66595a81b7c9ccf153edd8ae890f0128d5ffa026fd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:50:38 compute-0 podman[185855]: 2025-10-09 09:50:38.290536095 +0000 UTC m=+0.103546510 container start 79e90bda9afa9b0b8464c66595a81b7c9ccf153edd8ae890f0128d5ffa026fd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:50:38 compute-0 quizzical_moser[185877]: 167 167
Oct  9 09:50:38 compute-0 systemd[1]: libpod-79e90bda9afa9b0b8464c66595a81b7c9ccf153edd8ae890f0128d5ffa026fd6.scope: Deactivated successfully.
Oct  9 09:50:38 compute-0 podman[185855]: 2025-10-09 09:50:38.30062431 +0000 UTC m=+0.113634744 container attach 79e90bda9afa9b0b8464c66595a81b7c9ccf153edd8ae890f0128d5ffa026fd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_moser, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:50:38 compute-0 conmon[185877]: conmon 79e90bda9afa9b0b8464 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-79e90bda9afa9b0b8464c66595a81b7c9ccf153edd8ae890f0128d5ffa026fd6.scope/container/memory.events
Oct  9 09:50:38 compute-0 podman[185855]: 2025-10-09 09:50:38.301565193 +0000 UTC m=+0.114575608 container died 79e90bda9afa9b0b8464c66595a81b7c9ccf153edd8ae890f0128d5ffa026fd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  9 09:50:38 compute-0 podman[185855]: 2025-10-09 09:50:38.204347391 +0000 UTC m=+0.017357825 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:50:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb2248f44d066263a441b5a5c82403ca7637b3d1998777af3a9d44282fdb8ae9-merged.mount: Deactivated successfully.
Oct  9 09:50:38 compute-0 podman[185855]: 2025-10-09 09:50:38.331890162 +0000 UTC m=+0.144900576 container remove 79e90bda9afa9b0b8464c66595a81b7c9ccf153edd8ae890f0128d5ffa026fd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_moser, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  9 09:50:38 compute-0 systemd[1]: libpod-conmon-79e90bda9afa9b0b8464c66595a81b7c9ccf153edd8ae890f0128d5ffa026fd6.scope: Deactivated successfully.
Oct  9 09:50:38 compute-0 podman[185899]: 2025-10-09 09:50:38.448918193 +0000 UTC m=+0.027476266 container create eedcb58f70a3a1ce716736300a818e28906bb836f00bcef3f8b5206ff3d78db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hypatia, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  9 09:50:38 compute-0 python3.9[185872]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:50:38 compute-0 systemd[1]: Started libpod-conmon-eedcb58f70a3a1ce716736300a818e28906bb836f00bcef3f8b5206ff3d78db0.scope.
Oct  9 09:50:38 compute-0 systemd[1]: Reloading.
Oct  9 09:50:38 compute-0 podman[185899]: 2025-10-09 09:50:38.438586189 +0000 UTC m=+0.017144252 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:50:38 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:50:38 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:50:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:50:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:38.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
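The radosgw lines above show the probe pattern that repeats throughout this window: an anonymous "HEAD /" health check against the beast frontend, answered with 200. A minimal sketch of such a probe, assuming the frontend listens on compute-0:8080 (the journal records only the probing client IPs, 192.168.122.100/102, not the listening address, so host and port here are assumptions):

    import http.client

    # Hypothetical endpoint: host and port are assumptions, the log does not
    # record where beast listens.
    conn = http.client.HTTPConnection('compute-0', 8080, timeout=5)
    conn.request('HEAD', '/')
    resp = conn.getresponse()
    print(resp.status)  # the journal above records 200 for these probes
    conn.close()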
Oct  9 09:50:38 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:50:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da112f83f46b1fcb5d844bab52638016903e205bb3b1e4e7914aed1cf8ca18d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da112f83f46b1fcb5d844bab52638016903e205bb3b1e4e7914aed1cf8ca18d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da112f83f46b1fcb5d844bab52638016903e205bb3b1e4e7914aed1cf8ca18d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da112f83f46b1fcb5d844bab52638016903e205bb3b1e4e7914aed1cf8ca18d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da112f83f46b1fcb5d844bab52638016903e205bb3b1e4e7914aed1cf8ca18d4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:38 compute-0 podman[185899]: 2025-10-09 09:50:38.753539931 +0000 UTC m=+0.332098025 container init eedcb58f70a3a1ce716736300a818e28906bb836f00bcef3f8b5206ff3d78db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hypatia, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  9 09:50:38 compute-0 podman[185899]: 2025-10-09 09:50:38.761107422 +0000 UTC m=+0.339665485 container start eedcb58f70a3a1ce716736300a818e28906bb836f00bcef3f8b5206ff3d78db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hypatia, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct  9 09:50:38 compute-0 podman[185899]: 2025-10-09 09:50:38.764026426 +0000 UTC m=+0.342584489 container attach eedcb58f70a3a1ce716736300a818e28906bb836f00bcef3f8b5206ff3d78db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  9 09:50:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:38.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:38.858Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:38.858Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:38.859Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
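All three alertmanager webhook notifications above fail identically: the *.shiftstack receiver names do not resolve against 192.168.122.80. A quick resolution check over the same three hosts, as a hedged sketch (Python uses the system resolver configuration, so it should be run on compute-0 to reproduce the same lookup path):

    import socket

    # The three receivers named in the alertmanager errors above.
    for host in ('np0005478302.shiftstack',
                 'np0005478303.shiftstack',
                 'np0005478304.shiftstack'):
        try:
            print(host, '->', socket.gethostbyname(host))
        except socket.gaierror as exc:
            # Corresponds to the "no such host" failures in the log.
            print(host, '-> lookup failed:', exc)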
Oct  9 09:50:39 compute-0 awesome_hypatia[185914]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:50:39 compute-0 awesome_hypatia[185914]: --> All data devices are unavailable
Oct  9 09:50:39 compute-0 systemd[1]: libpod-eedcb58f70a3a1ce716736300a818e28906bb836f00bcef3f8b5206ff3d78db0.scope: Deactivated successfully.
Oct  9 09:50:39 compute-0 podman[185899]: 2025-10-09 09:50:39.033832127 +0000 UTC m=+0.612390190 container died eedcb58f70a3a1ce716736300a818e28906bb836f00bcef3f8b5206ff3d78db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:50:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-da112f83f46b1fcb5d844bab52638016903e205bb3b1e4e7914aed1cf8ca18d4-merged.mount: Deactivated successfully.
Oct  9 09:50:39 compute-0 podman[185899]: 2025-10-09 09:50:39.062269465 +0000 UTC m=+0.640827527 container remove eedcb58f70a3a1ce716736300a818e28906bb836f00bcef3f8b5206ff3d78db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:50:39 compute-0 systemd[1]: libpod-conmon-eedcb58f70a3a1ce716736300a818e28906bb836f00bcef3f8b5206ff3d78db0.scope: Deactivated successfully.
Oct  9 09:50:39 compute-0 python3.9[186038]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:50:39 compute-0 systemd[1]: Reloading.
Oct  9 09:50:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:39.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:39 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:50:39 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:50:39 compute-0 podman[186168]: 2025-10-09 09:50:39.507118504 +0000 UTC m=+0.027856474 container create b2a1c33ec4edc0d0641d41223d5608ac42c2acc5d7d4d4d886758b43090a4374 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:50:39 compute-0 systemd[1]: Started libpod-conmon-b2a1c33ec4edc0d0641d41223d5608ac42c2acc5d7d4d4d886758b43090a4374.scope.
Oct  9 09:50:39 compute-0 systemd[1]: Starting nova_compute container...
Oct  9 09:50:39 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:50:39 compute-0 podman[186168]: 2025-10-09 09:50:39.494990663 +0000 UTC m=+0.015728633 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:50:39 compute-0 podman[186168]: 2025-10-09 09:50:39.601177311 +0000 UTC m=+0.121915301 container init b2a1c33ec4edc0d0641d41223d5608ac42c2acc5d7d4d4d886758b43090a4374 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  9 09:50:39 compute-0 podman[186168]: 2025-10-09 09:50:39.607307251 +0000 UTC m=+0.128045221 container start b2a1c33ec4edc0d0641d41223d5608ac42c2acc5d7d4d4d886758b43090a4374 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_kapitsa, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  9 09:50:39 compute-0 podman[186168]: 2025-10-09 09:50:39.608492545 +0000 UTC m=+0.129230526 container attach b2a1c33ec4edc0d0641d41223d5608ac42c2acc5d7d4d4d886758b43090a4374 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  9 09:50:39 compute-0 nice_kapitsa[186183]: 167 167
Oct  9 09:50:39 compute-0 systemd[1]: libpod-b2a1c33ec4edc0d0641d41223d5608ac42c2acc5d7d4d4d886758b43090a4374.scope: Deactivated successfully.
Oct  9 09:50:39 compute-0 podman[186197]: 2025-10-09 09:50:39.637304859 +0000 UTC m=+0.017369206 container died b2a1c33ec4edc0d0641d41223d5608ac42c2acc5d7d4d4d886758b43090a4374 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_kapitsa, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  9 09:50:39 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:50:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-dce91d3c1169b87966bb66c9360b241dd2362bdef73ca499d40d06c8c5a9ccef-merged.mount: Deactivated successfully.
Oct  9 09:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0447301f3b5630befd23cfb22b21d988f236b695bb718b8ffb5b4a582edab77c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0447301f3b5630befd23cfb22b21d988f236b695bb718b8ffb5b4a582edab77c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0447301f3b5630befd23cfb22b21d988f236b695bb718b8ffb5b4a582edab77c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0447301f3b5630befd23cfb22b21d988f236b695bb718b8ffb5b4a582edab77c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0447301f3b5630befd23cfb22b21d988f236b695bb718b8ffb5b4a582edab77c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:39 compute-0 podman[186197]: 2025-10-09 09:50:39.663626167 +0000 UTC m=+0.043690504 container remove b2a1c33ec4edc0d0641d41223d5608ac42c2acc5d7d4d4d886758b43090a4374 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_kapitsa, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:50:39 compute-0 podman[186185]: 2025-10-09 09:50:39.667989014 +0000 UTC m=+0.072525900 container init 11f8d9fb149efab552822aef2596b2e7646ddb3066789052699103de322e79d7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Oct  9 09:50:39 compute-0 systemd[1]: libpod-conmon-b2a1c33ec4edc0d0641d41223d5608ac42c2acc5d7d4d4d886758b43090a4374.scope: Deactivated successfully.
Oct  9 09:50:39 compute-0 podman[186185]: 2025-10-09 09:50:39.6758137 +0000 UTC m=+0.080350566 container start 11f8d9fb149efab552822aef2596b2e7646ddb3066789052699103de322e79d7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:50:39 compute-0 podman[186185]: nova_compute
Oct  9 09:50:39 compute-0 nova_compute[186208]: + sudo -E kolla_set_configs
Oct  9 09:50:39 compute-0 systemd[1]: Started nova_compute container.
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Validating config file
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Copying service configuration files
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Deleting /etc/ceph
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Creating directory /etc/ceph
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Setting permission for /etc/ceph
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Writing out command to execute
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:39 compute-0 nova_compute[186208]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  9 09:50:39 compute-0 nova_compute[186208]: ++ cat /run_command
Oct  9 09:50:39 compute-0 nova_compute[186208]: + CMD=nova-compute
Oct  9 09:50:39 compute-0 nova_compute[186208]: + ARGS=
Oct  9 09:50:39 compute-0 nova_compute[186208]: + sudo kolla_copy_cacerts
Oct  9 09:50:39 compute-0 nova_compute[186208]: + [[ ! -n '' ]]
Oct  9 09:50:39 compute-0 nova_compute[186208]: + . kolla_extend_start
Oct  9 09:50:39 compute-0 nova_compute[186208]: Running command: 'nova-compute'
Oct  9 09:50:39 compute-0 nova_compute[186208]: + echo 'Running command: '\''nova-compute'\'''
Oct  9 09:50:39 compute-0 nova_compute[186208]: + umask 0022
Oct  9 09:50:39 compute-0 nova_compute[186208]: + exec nova-compute
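The INFO lines above are kolla_set_configs working through config.json with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS: delete the destination, copy the source into place, set ownership and permissions, then write out /run_command for the shell trace that follows. A minimal sketch of that copy loop, assuming the usual kolla config.json layout (source/dest/owner/perm entries); this is an illustration of the logged behaviour, not kolla's actual implementation, and it handles plain files only (the real tool also recreates directories, as the /etc/ceph lines show):

    import json
    import os
    import shutil

    # Assumed layout: {"command": "...", "config_files":
    #   [{"source": ..., "dest": ..., "owner": ..., "perm": ...}, ...]}
    with open('/var/lib/kolla/config_files/config.json') as f:
        cfg = json.load(f)

    for entry in cfg.get('config_files', []):
        dest = entry['dest']
        if os.path.isfile(dest):
            os.remove(dest)                    # "Deleting ..." lines above
        shutil.copy(entry['source'], dest)     # "Copying ... to ..." lines
        os.chmod(dest, int(entry.get('perm', '0600'), 8))  # "Setting permission"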
Oct  9 09:50:39 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v513: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:50:39 compute-0 podman[186255]: 2025-10-09 09:50:39.806475223 +0000 UTC m=+0.030287788 container create 7e228c757858e6983066144011e7a64737bdd9c7711c1d14bdced6fff2f09843 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  9 09:50:39 compute-0 systemd[1]: Started libpod-conmon-7e228c757858e6983066144011e7a64737bdd9c7711c1d14bdced6fff2f09843.scope.
Oct  9 09:50:39 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89aca587698949d2c147110d7de210a165bd6c77e2fb79da2ba8eb269d449995/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89aca587698949d2c147110d7de210a165bd6c77e2fb79da2ba8eb269d449995/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89aca587698949d2c147110d7de210a165bd6c77e2fb79da2ba8eb269d449995/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89aca587698949d2c147110d7de210a165bd6c77e2fb79da2ba8eb269d449995/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:39 compute-0 podman[186255]: 2025-10-09 09:50:39.866463128 +0000 UTC m=+0.090275693 container init 7e228c757858e6983066144011e7a64737bdd9c7711c1d14bdced6fff2f09843 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_stonebraker, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  9 09:50:39 compute-0 podman[186255]: 2025-10-09 09:50:39.872229742 +0000 UTC m=+0.096042297 container start 7e228c757858e6983066144011e7a64737bdd9c7711c1d14bdced6fff2f09843 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_stonebraker, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:50:39 compute-0 podman[186255]: 2025-10-09 09:50:39.87352262 +0000 UTC m=+0.097335176 container attach 7e228c757858e6983066144011e7a64737bdd9c7711c1d14bdced6fff2f09843 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  9 09:50:39 compute-0 podman[186255]: 2025-10-09 09:50:39.795527026 +0000 UTC m=+0.019339572 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]: {
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:    "1": [
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:        {
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:            "devices": [
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:                "/dev/loop3"
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:            ],
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:            "lv_name": "ceph_lv0",
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:            "lv_size": "21470642176",
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:            "name": "ceph_lv0",
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:            "tags": {
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:                "ceph.cluster_name": "ceph",
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:                "ceph.crush_device_class": "",
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:                "ceph.encrypted": "0",
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:                "ceph.osd_id": "1",
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:                "ceph.type": "block",
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:                "ceph.vdo": "0",
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:                "ceph.with_tpm": "0"
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:            },
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:            "type": "block",
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:            "vg_name": "ceph_vg0"
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:        }
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]:    ]
Oct  9 09:50:40 compute-0 sad_stonebraker[186268]: }
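The sad_stonebraker output above is the JSON form of "ceph-volume lvm list": one key per OSD id, each carrying the LV path, backing devices, and the ceph.* tags that bind the LV to the cluster fsid and osd_fsid. A short sketch that runs the same query and extracts the fields of interest (requires ceph-volume on the host, which here the cephadm containers provide):

    import json
    import subprocess

    # Same query the container above ran.
    out = subprocess.run(
        ['ceph-volume', 'lvm', 'list', '--format', 'json'],
        capture_output=True, text=True, check=True,
    ).stdout

    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            tags = lv['tags']
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, devices={lv['devices']})")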
Oct  9 09:50:40 compute-0 systemd[1]: libpod-7e228c757858e6983066144011e7a64737bdd9c7711c1d14bdced6fff2f09843.scope: Deactivated successfully.
Oct  9 09:50:40 compute-0 podman[186278]: 2025-10-09 09:50:40.143746722 +0000 UTC m=+0.017797944 container died 7e228c757858e6983066144011e7a64737bdd9c7711c1d14bdced6fff2f09843 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_stonebraker, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:50:40 compute-0 podman[186278]: 2025-10-09 09:50:40.163259359 +0000 UTC m=+0.037310581 container remove 7e228c757858e6983066144011e7a64737bdd9c7711c1d14bdced6fff2f09843 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_stonebraker, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:50:40 compute-0 systemd[1]: libpod-conmon-7e228c757858e6983066144011e7a64737bdd9c7711c1d14bdced6fff2f09843.scope: Deactivated successfully.
Oct  9 09:50:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-89aca587698949d2c147110d7de210a165bd6c77e2fb79da2ba8eb269d449995-merged.mount: Deactivated successfully.
Oct  9 09:50:40 compute-0 podman[186500]: 2025-10-09 09:50:40.614289904 +0000 UTC m=+0.042098201 container create fd5975a35385b7f4fe68828679e443eabb28d6b03291f6de6d2d819137c40399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_napier, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:50:40 compute-0 systemd[1]: Started libpod-conmon-fd5975a35385b7f4fe68828679e443eabb28d6b03291f6de6d2d819137c40399.scope.
Oct  9 09:50:40 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:50:40 compute-0 podman[186500]: 2025-10-09 09:50:40.668086134 +0000 UTC m=+0.095894440 container init fd5975a35385b7f4fe68828679e443eabb28d6b03291f6de6d2d819137c40399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_napier, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  9 09:50:40 compute-0 podman[186500]: 2025-10-09 09:50:40.674408727 +0000 UTC m=+0.102217022 container start fd5975a35385b7f4fe68828679e443eabb28d6b03291f6de6d2d819137c40399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_napier, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  9 09:50:40 compute-0 podman[186500]: 2025-10-09 09:50:40.675457605 +0000 UTC m=+0.103265901 container attach fd5975a35385b7f4fe68828679e443eabb28d6b03291f6de6d2d819137c40399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_napier, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  9 09:50:40 compute-0 blissful_napier[186513]: 167 167
Oct  9 09:50:40 compute-0 systemd[1]: libpod-fd5975a35385b7f4fe68828679e443eabb28d6b03291f6de6d2d819137c40399.scope: Deactivated successfully.
Oct  9 09:50:40 compute-0 conmon[186513]: conmon fd5975a35385b7f4fe68 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fd5975a35385b7f4fe68828679e443eabb28d6b03291f6de6d2d819137c40399.scope/container/memory.events
Oct  9 09:50:40 compute-0 podman[186500]: 2025-10-09 09:50:40.600921193 +0000 UTC m=+0.028729509 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:50:40 compute-0 podman[186518]: 2025-10-09 09:50:40.70974864 +0000 UTC m=+0.017734364 container died fd5975a35385b7f4fe68828679e443eabb28d6b03291f6de6d2d819137c40399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_napier, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 09:50:40 compute-0 python3.9[186498]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:50:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5651875ad4e5fa16818d2e25dbd05486b4978a238baee68ccf0bacf49b4db9e-merged.mount: Deactivated successfully.
Oct  9 09:50:40 compute-0 podman[186518]: 2025-10-09 09:50:40.734355745 +0000 UTC m=+0.042341469 container remove fd5975a35385b7f4fe68828679e443eabb28d6b03291f6de6d2d819137c40399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_napier, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:50:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:40.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:40 compute-0 systemd[1]: libpod-conmon-fd5975a35385b7f4fe68828679e443eabb28d6b03291f6de6d2d819137c40399.scope: Deactivated successfully.
Oct  9 09:50:40 compute-0 podman[186561]: 2025-10-09 09:50:40.858917875 +0000 UTC m=+0.028599083 container create 502666c01cde0993524f9678fe45240d1ab831190dc746f99e01cd3e9fa8fb14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:50:40 compute-0 systemd[1]: Started libpod-conmon-502666c01cde0993524f9678fe45240d1ab831190dc746f99e01cd3e9fa8fb14.scope.
Oct  9 09:50:40 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d240469a7a754945a874c84e2ea1a9f8f7bc2f266891fef0b76ca038963b5ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d240469a7a754945a874c84e2ea1a9f8f7bc2f266891fef0b76ca038963b5ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d240469a7a754945a874c84e2ea1a9f8f7bc2f266891fef0b76ca038963b5ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d240469a7a754945a874c84e2ea1a9f8f7bc2f266891fef0b76ca038963b5ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:40 compute-0 podman[186561]: 2025-10-09 09:50:40.913695475 +0000 UTC m=+0.083376693 container init 502666c01cde0993524f9678fe45240d1ab831190dc746f99e01cd3e9fa8fb14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_zhukovsky, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  9 09:50:40 compute-0 podman[186561]: 2025-10-09 09:50:40.918870023 +0000 UTC m=+0.088551232 container start 502666c01cde0993524f9678fe45240d1ab831190dc746f99e01cd3e9fa8fb14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_zhukovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  9 09:50:40 compute-0 podman[186561]: 2025-10-09 09:50:40.920032045 +0000 UTC m=+0.089713263 container attach 502666c01cde0993524f9678fe45240d1ab831190dc746f99e01cd3e9fa8fb14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_zhukovsky, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  9 09:50:40 compute-0 podman[186561]: 2025-10-09 09:50:40.847885881 +0000 UTC m=+0.017567109 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:50:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:41.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:41 compute-0 upbeat_zhukovsky[186574]: {}
Oct  9 09:50:41 compute-0 lvm[186777]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:50:41 compute-0 lvm[186777]: VG ceph_vg0 finished
Oct  9 09:50:41 compute-0 systemd[1]: libpod-502666c01cde0993524f9678fe45240d1ab831190dc746f99e01cd3e9fa8fb14.scope: Deactivated successfully.
Oct  9 09:50:41 compute-0 podman[186561]: 2025-10-09 09:50:41.462174641 +0000 UTC m=+0.631855859 container died 502666c01cde0993524f9678fe45240d1ab831190dc746f99e01cd3e9fa8fb14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  9 09:50:41 compute-0 python3.9[186759]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:50:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d240469a7a754945a874c84e2ea1a9f8f7bc2f266891fef0b76ca038963b5ac-merged.mount: Deactivated successfully.
Oct  9 09:50:41 compute-0 podman[186561]: 2025-10-09 09:50:41.489456488 +0000 UTC m=+0.659137697 container remove 502666c01cde0993524f9678fe45240d1ab831190dc746f99e01cd3e9fa8fb14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_zhukovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:50:41 compute-0 systemd[1]: libpod-conmon-502666c01cde0993524f9678fe45240d1ab831190dc746f99e01cd3e9fa8fb14.scope: Deactivated successfully.
Oct  9 09:50:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:50:41 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:50:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:50:41 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:50:41 compute-0 nova_compute[186208]: 2025-10-09 09:50:41.643 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  9 09:50:41 compute-0 nova_compute[186208]: 2025-10-09 09:50:41.644 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  9 09:50:41 compute-0 nova_compute[186208]: 2025-10-09 09:50:41.645 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  9 09:50:41 compute-0 nova_compute[186208]: 2025-10-09 09:50:41.645 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
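
The four os_vif lines above record nova-compute loading its VIF plug drivers during start-up; os_vif discovers plugin classes through Python entry points and logs each one it initializes. A minimal sketch of the call that produces those messages, assuming the os_vif package and the linux_bridge/noop/ovs plugin packages are installed:

    # Minimal sketch: os_vif.initialize() scans the 'os_vif' entry-point
    # namespace, loads each plugin, and logs the DEBUG/INFO lines above.
    import os_vif

    os_vif.initialize()
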
Oct  9 09:50:41 compute-0 nova_compute[186208]: 2025-10-09 09:50:41.772 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:50:41 compute-0 nova_compute[186208]: 2025-10-09 09:50:41.783 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
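
The grep probe above is how nova detects whether the installed iscsiadm supports the node.session.scan feature: exit status 0 means the string is present in the binary. A hedged reconstruction using the oslo.concurrency helper the log names (processutils.execute emits the matching "Running cmd" and "CMD ... returned" DEBUG lines):

    # Hedged sketch of the capability probe, not nova's literal code.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'grep', '-F', 'node.session.scan', '/sbin/iscsiadm',
        check_exit_code=[0, 1])  # 0 = string found (supported), 1 = absent
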
Oct  9 09:50:41 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v514: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:50:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:41 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:50:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:41 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:50:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:41 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:50:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:41 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:50:42 compute-0 python3.9[186966]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.203 2 INFO nova.virt.driver [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct  9 09:50:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:50:42] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Oct  9 09:50:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:50:42] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
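
The paired lines above are the ceph-mgr prometheus module answering a scrape of /metrics, logged once from the container's stdout and once by the module's cherrypy access logger. A hedged sketch of an equivalent manual scrape; the port is an assumption (9283 is the prometheus module's default and does not appear in the log):

    # Hedged sketch: fetch the same exporter endpoint by hand.
    # Port 9283 is assumed (module default); only the path appears in the log.
    import urllib.request

    with urllib.request.urlopen('http://192.168.122.100:9283/metrics',
                                timeout=5) as resp:
        print(resp.status, len(resp.read()), 'bytes of metrics')
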
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.284 2 INFO nova.compute.provider_config [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.310 2 DEBUG oslo_concurrency.lockutils [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.310 2 DEBUG oslo_concurrency.lockutils [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.310 2 DEBUG oslo_concurrency.lockutils [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
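
The Acquiring/Acquired/Releasing triple above is oslo.concurrency's lock helper, and the "Full set of CONF" banner that follows it is oslo.config dumping every resolved option, one DEBUG line per option, because the service's log_options flag is enabled (see log_options = True further down). A hedged sketch of both calls; note that log_opt_values masks options marked secret, which is why transport_url appears as ****:

    # Hedged sketch, not nova's literal code.
    import logging
    from oslo_concurrency import lockutils
    from oslo_config import cfg

    LOG = logging.getLogger(__name__)

    # Emits the Acquiring/Acquired/Releasing "singleton_lock" DEBUG triple.
    with lockutils.lock('singleton_lock'):
        pass  # critical section

    # Emits the banner plus one "name = value" line per registered option,
    # printing **** for secret options (e.g. transport_url).
    cfg.CONF.log_opt_values(LOG, logging.DEBUG)
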
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.310 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.311 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.311 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.311 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.311 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.311 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.311 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.311 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.312 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.312 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.312 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.312 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.312 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.312 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.312 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.313 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.313 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.313 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.313 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.313 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.313 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.313 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.314 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.314 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.314 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.314 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.314 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.314 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.314 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.315 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.315 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.315 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.315 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.315 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.315 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.315 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.316 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.316 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.316 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.316 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.316 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.316 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.316 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.317 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.317 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.317 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.317 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.317 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.317 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.317 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.318 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.318 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.318 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.318 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.318 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.318 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.318 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.319 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.319 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.319 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.319 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.319 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.319 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.319 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.320 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.320 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.320 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.320 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.320 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.320 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.320 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.320 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.321 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.321 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.321 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.321 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.321 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.321 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.321 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.322 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.322 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.322 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.322 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.322 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.322 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.323 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.323 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.323 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.323 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.323 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.323 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.323 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.324 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.324 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.324 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.324 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.324 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.324 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.324 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.325 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.325 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.325 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.325 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.325 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.325 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.325 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.325 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.326 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.326 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.326 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.326 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.326 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.326 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.326 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.327 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.327 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.327 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.327 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.327 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.327 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.327 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.328 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.328 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.328 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.328 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.328 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.328 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.328 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.328 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.329 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.329 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.329 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.329 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.329 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.329 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.329 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.330 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.330 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.330 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.330 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.330 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.330 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.330 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.330 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.331 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.331 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.331 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.331 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.331 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.331 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.331 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.332 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.332 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.332 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.332 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.332 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.332 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.332 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.333 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.333 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.333 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.333 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.333 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.333 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.333 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.334 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.334 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.334 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.334 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.334 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.334 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.334 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.335 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.335 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.335 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.335 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.335 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.335 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.335 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.336 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.336 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.336 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.336 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.336 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.336 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.336 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.337 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.337 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.337 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.337 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.337 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.337 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.337 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.338 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.338 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.338 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.338 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.338 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.338 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.338 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.339 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.339 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.339 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.339 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.339 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.339 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.339 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.339 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.340 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.340 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.340 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.340 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.340 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.340 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.340 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.341 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.341 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.341 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.341 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.341 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.341 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.341 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.342 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.342 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.342 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.342 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.342 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.342 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.342 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.343 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.343 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.343 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.343 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.343 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.343 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.343 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.344 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.344 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.344 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.344 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.344 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.344 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.344 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.345 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.345 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.345 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.345 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.345 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.345 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.345 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.345 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.346 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.346 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.346 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.346 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.346 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.346 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.346 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.347 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.347 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.347 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.347 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.347 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.347 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.348 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.348 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.348 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.348 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.348 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.348 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.348 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.349 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.349 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.349 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.349 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.349 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.349 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.349 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.349 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.350 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.350 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.350 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.350 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.350 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.350 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.350 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.351 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.351 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.351 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.351 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.351 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.351 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.351 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.352 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.352 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.352 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.352 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.352 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.352 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.352 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.353 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.353 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.353 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.353 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.353 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.353 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.353 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.354 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.354 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.354 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.354 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.354 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.354 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.354 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.354 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.355 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.355 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.355 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.355 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.355 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.355 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.356 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.356 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.356 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.356 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.357 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.357 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.357 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.357 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.357 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.358 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.358 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.358 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.358 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.358 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.358 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.359 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.359 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.359 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.359 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.359 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.360 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.360 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.360 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.360 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.360 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.360 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.360 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.361 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.361 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.361 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.361 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.361 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.361 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.362 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.362 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.362 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.362 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.363 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.363 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.363 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.363 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.364 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.364 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.364 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.364 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.364 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.364 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.365 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.365 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.365 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.365 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.365 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.365 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.365 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.366 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.366 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.366 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.366 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.366 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.366 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.366 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.366 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.367 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.367 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.367 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.367 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.367 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.367 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.367 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.368 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.368 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.368 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.368 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.368 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.368 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.368 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.369 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.369 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.369 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.369 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.369 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.369 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.369 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.370 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.370 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.370 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.370 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.370 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.370 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.370 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.371 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.371 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.371 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.371 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.371 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.371 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.371 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.371 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.372 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.372 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.372 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.372 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.372 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.372 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.372 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.373 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.373 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.373 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.373 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.373 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.373 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.373 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.374 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.374 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.374 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.378 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.378 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.378 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.379 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.379 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.379 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.379 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.379 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.379 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.380 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.380 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.380 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.380 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.380 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.380 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.380 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.381 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.381 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.381 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.381 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.381 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.381 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.381 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.382 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.382 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.382 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.382 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.382 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.382 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.382 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.383 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.383 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.383 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.383 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.383 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.383 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.383 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.384 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.384 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.384 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.384 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.384 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.384 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.384 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.385 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.385 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.385 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.385 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.385 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.385 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.385 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.386 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.386 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.386 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.386 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.386 2 WARNING oslo_config.cfg [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  9 09:50:42 compute-0 nova_compute[186208]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  9 09:50:42 compute-0 nova_compute[186208]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct  9 09:50:42 compute-0 nova_compute[186208]: and ``live_migration_inbound_addr`` respectively.
Oct  9 09:50:42 compute-0 nova_compute[186208]: ).  Its value may be silently ignored in the future.#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.386 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
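The warning above flags ``live_migration_uri`` as deprecated in favor of ``live_migration_scheme`` and ``live_migration_inbound_addr``, both of which appear (unset) earlier in this dump. A minimal nova.conf sketch of the equivalent replacement, assuming the ``qemu+tls://%s/system`` URI logged above — the scheme value ``tls`` is implied by that URI, while the inbound address below is a purely hypothetical migration-network IP:

    [libvirt]
    # Deprecated form, still set in this deployment:
    #   live_migration_uri = qemu+tls://%s/system
    # Replacement options per the deprecation notice; "tls" mirrors the
    # qemu+tls scheme above. The address is a hypothetical example value
    # (use the host's migration-network address in a real deployment).
    live_migration_scheme = tls
    live_migration_inbound_addr = 192.0.2.10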
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.387 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.387 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.387 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.387 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.387 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.387 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.387 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.388 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.388 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.388 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.388 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.388 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.388 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.389 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.389 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.389 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.389 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.389 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.rbd_secret_uuid        = 286f8bf0-da72-5823-9a4e-ac4457d9e609 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.389 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.389 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.389 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.390 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.390 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.390 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.390 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.390 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.390 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.391 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.391 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.391 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.391 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.391 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.391 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.391 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.392 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.392 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.392 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.392 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.392 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.392 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.392 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.393 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.393 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.393 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.393 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.393 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.393 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.393 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.394 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.394 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.394 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.394 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.394 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.394 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.394 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.395 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.395 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.395 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.395 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.395 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.395 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.395 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.396 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.396 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.396 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.396 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.396 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.396 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.396 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.397 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.397 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.397 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.397 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.397 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.397 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.397 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.398 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.398 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.398 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.398 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.398 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.398 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.398 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.399 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.399 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.399 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.399 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.399 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.399 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.399 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.400 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.400 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.400 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.400 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.400 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.400 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.400 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.401 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.401 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.401 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.401 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.401 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.401 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.401 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.402 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.402 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.402 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.402 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.402 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.402 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.402 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.403 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.403 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.403 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.403 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.403 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.403 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.403 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.404 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.404 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.404 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.404 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.404 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.404 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.404 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.405 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.405 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.405 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.405 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.405 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.405 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.405 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.405 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.406 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.406 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.406 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.406 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.406 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.406 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.407 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.407 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.407 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.407 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.407 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.407 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.407 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.408 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.408 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.408 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.408 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.408 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.408 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.408 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.409 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.409 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.409 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.409 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.409 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.409 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.409 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.410 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.410 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.410 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.410 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.410 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.410 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.410 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.411 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.411 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.411 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.411 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.411 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.411 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.411 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.411 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.412 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.412 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.412 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.412 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.412 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.412 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.413 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.414 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.414 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.414 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.414 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.414 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.415 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.415 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.415 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.415 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.415 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.415 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.415 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.416 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.416 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.416 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.416 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.416 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.416 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.417 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.417 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.417 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.417 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.417 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.417 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.417 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.418 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.418 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.418 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.418 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.418 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.418 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.418 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.418 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.419 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.419 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.419 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.419 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.419 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.419 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.419 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.419 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.420 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.420 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.420 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.420 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.424 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.425 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.425 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.425 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.425 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.425 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.426 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.426 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.426 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.426 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.426 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.426 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.426 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.427 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.427 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.427 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.427 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.427 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.427 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.428 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.428 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.428 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.428 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.428 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.428 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.429 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.429 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.429 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.429 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.429 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.429 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.429 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.430 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.430 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.430 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.430 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.430 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.430 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.430 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.430 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.431 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.431 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.431 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.431 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.431 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.431 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.432 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.432 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.432 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.432 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.432 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.432 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.432 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.432 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.433 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.433 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.433 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.433 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.433 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.433 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.433 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.434 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.434 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.434 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.434 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.434 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.434 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.434 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.435 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.435 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.435 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.435 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.435 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.435 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.435 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.436 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.436 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.436 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.436 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.436 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.436 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.436 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.437 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.437 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.437 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.437 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.437 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.437 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.437 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.438 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.438 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.438 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.438 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.438 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.438 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.438 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.438 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.439 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.439 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.439 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.439 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.439 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.439 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.439 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.440 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.440 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.440 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.440 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.440 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.440 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.440 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.441 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.441 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.441 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.441 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.441 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.441 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.441 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.442 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.442 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.442 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.442 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.442 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.442 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.442 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.442 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.443 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.443 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.443 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.443 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.443 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.443 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.443 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.444 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.444 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.444 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.444 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.444 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.444 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.444 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.444 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.445 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.445 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.445 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.445 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.445 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.445 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.445 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.446 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.446 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.446 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.446 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.446 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.446 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.446 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.447 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.447 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.447 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.447 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.447 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.447 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.447 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.447 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.448 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.448 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.448 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.448 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.448 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.448 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.448 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.449 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.449 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.449 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.449 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.449 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.449 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.449 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.450 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.450 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.450 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.450 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.450 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.450 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.450 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.450 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.451 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.451 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.451 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.451 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.451 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.451 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.451 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.452 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.452 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.452 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.452 2 DEBUG oslo_service.service [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.453 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.468 2 DEBUG nova.virt.libvirt.host [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.468 2 DEBUG nova.virt.libvirt.host [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.468 2 DEBUG nova.virt.libvirt.host [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.469 2 DEBUG nova.virt.libvirt.host [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct  9 09:50:42 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct  9 09:50:42 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.516 2 DEBUG nova.virt.libvirt.host [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f83373a4430> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.518 2 DEBUG nova.virt.libvirt.host [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f83373a4430> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.518 2 INFO nova.virt.libvirt.driver [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Connection event '1' reason 'None'#033[00m
Oct  9 09:50:42 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:50:42 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.532 2 WARNING nova.virt.libvirt.driver [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct  9 09:50:42 compute-0 nova_compute[186208]: 2025-10-09 09:50:42.532 2 DEBUG nova.virt.libvirt.volume.mount [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct  9 09:50:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:50:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:42.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:50:42 compute-0 python3.9[187171]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.196 2 INFO nova.virt.libvirt.host [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Libvirt host capabilities <capabilities>
Oct  9 09:50:43 compute-0 nova_compute[186208]: 
Oct  9 09:50:43 compute-0 rsyslogd[1243]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <host>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <uuid>c2ce88da-801c-421f-a8d6-32aab8dfbba9</uuid>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <cpu>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <arch>x86_64</arch>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model>EPYC-Milan-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <vendor>AMD</vendor>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <microcode version='167776725'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <signature family='25' model='1' stepping='1'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <topology sockets='4' dies='1' clusters='1' cores='1' threads='1'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <maxphysaddr mode='emulate' bits='48'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='x2apic'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='tsc-deadline'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='osxsave'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='hypervisor'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='tsc_adjust'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='ospke'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='vaes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='vpclmulqdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='spec-ctrl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='stibp'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='arch-capabilities'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='ssbd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='cmp_legacy'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='virt-ssbd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='lbrv'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='tsc-scale'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='vmcb-clean'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='pause-filter'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='pfthreshold'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='v-vmsave-vmload'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='vgif'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='rdctl-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='skip-l1dfl-vmentry'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='mds-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature name='pschange-mc-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <pages unit='KiB' size='4'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <pages unit='KiB' size='2048'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <pages unit='KiB' size='1048576'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </cpu>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <power_management>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <suspend_mem/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </power_management>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <iommu support='no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <migration_features>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <live/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <uri_transports>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <uri_transport>tcp</uri_transport>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <uri_transport>rdma</uri_transport>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </uri_transports>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </migration_features>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <topology>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <cells num='1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <cell id='0'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:          <memory unit='KiB'>7865152</memory>
Oct  9 09:50:43 compute-0 nova_compute[186208]:          <pages unit='KiB' size='4'>1966288</pages>
Oct  9 09:50:43 compute-0 nova_compute[186208]:          <pages unit='KiB' size='2048'>0</pages>
Oct  9 09:50:43 compute-0 nova_compute[186208]:          <pages unit='KiB' size='1048576'>0</pages>
Oct  9 09:50:43 compute-0 nova_compute[186208]:          <distances>
Oct  9 09:50:43 compute-0 nova_compute[186208]:            <sibling id='0' value='10'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:          </distances>
Oct  9 09:50:43 compute-0 nova_compute[186208]:          <cpus num='4'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:          </cpus>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        </cell>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </cells>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </topology>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <cache>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </cache>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <secmodel>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model>selinux</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <doi>0</doi>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </secmodel>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <secmodel>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model>dac</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <doi>0</doi>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </secmodel>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </host>
Oct  9 09:50:43 compute-0 nova_compute[186208]: 
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <guest>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <os_type>hvm</os_type>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <arch name='i686'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <wordsize>32</wordsize>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <domain type='qemu'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <domain type='kvm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </arch>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <features>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <pae/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <nonpae/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <acpi default='on' toggle='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <apic default='on' toggle='no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <cpuselection/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <deviceboot/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <disksnapshot default='on' toggle='no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <externalSnapshot/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </features>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </guest>
Oct  9 09:50:43 compute-0 nova_compute[186208]: 
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <guest>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <os_type>hvm</os_type>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <arch name='x86_64'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <wordsize>64</wordsize>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <domain type='qemu'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <domain type='kvm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </arch>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <features>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <acpi default='on' toggle='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <apic default='on' toggle='no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <cpuselection/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <deviceboot/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <disksnapshot default='on' toggle='no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <externalSnapshot/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </features>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </guest>
Oct  9 09:50:43 compute-0 nova_compute[186208]: 
Oct  9 09:50:43 compute-0 nova_compute[186208]: </capabilities>
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.202 2 DEBUG nova.virt.libvirt.host [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.220 2 DEBUG nova.virt.libvirt.host [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct  9 09:50:43 compute-0 nova_compute[186208]: <domainCapabilities>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <domain>kvm</domain>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <arch>i686</arch>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <vcpu max='240'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <iothreads supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <os supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <enum name='firmware'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <loader supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='type'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>rom</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>pflash</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='readonly'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>yes</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>no</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='secure'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>no</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </loader>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </os>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <cpu>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>on</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>off</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </mode>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='maximumMigratable'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>on</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>off</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </mode>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <vendor>AMD</vendor>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='succor'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </mode>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <mode name='custom' supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Broadwell'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cooperlake'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Denverton'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Denverton-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='GraniteRapids'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx10'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx10-128'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx10-256'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx10-512'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Haswell'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Haswell-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Haswell-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='KnightsMill'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Opteron_G4'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xop'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xop'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Opteron_G5'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xop'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xop'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SapphireRapids'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 podman[187274]: 2025-10-09 09:50:43.261183743 +0000 UTC m=+0.052216130 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SierraForest'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Client'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Snowridge'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='athlon'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='athlon-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='core2duo'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='core2duo-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='coreduo'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='coreduo-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='n270'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='n270-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='phenom'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='phenom-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </mode>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </cpu>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <memoryBacking supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <enum name='sourceType'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <value>file</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <value>anonymous</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <value>memfd</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </memoryBacking>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <devices>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <disk supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='diskDevice'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>disk</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>cdrom</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>floppy</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>lun</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='bus'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>ide</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>fdc</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>scsi</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>usb</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>sata</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='model'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </disk>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <graphics supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='type'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>vnc</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>egl-headless</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>dbus</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </graphics>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <video supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='modelType'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>vga</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>cirrus</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>none</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>bochs</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>ramfb</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </video>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <hostdev supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='mode'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>subsystem</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='startupPolicy'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>default</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>mandatory</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>requisite</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>optional</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='subsysType'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>usb</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>pci</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>scsi</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='capsType'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='pciBackend'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </hostdev>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <rng supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='model'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>random</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>egd</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>builtin</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </rng>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <filesystem supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='driverType'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>path</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>handle</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtiofs</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </filesystem>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <tpm supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='model'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>tpm-tis</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>tpm-crb</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>emulator</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>external</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='backendVersion'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>2.0</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </tpm>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <redirdev supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='bus'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>usb</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </redirdev>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <channel supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='type'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>pty</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>unix</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </channel>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <crypto supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='model'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='type'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>qemu</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>builtin</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </crypto>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <interface supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='backendType'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>default</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>passt</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </interface>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <panic supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='model'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>isa</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>hyperv</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </panic>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </devices>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <features>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <gic supported='no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <genid supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <backup supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <async-teardown supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <ps2 supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <sev supported='no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <sgx supported='no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <hyperv supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='features'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>relaxed</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>vapic</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>spinlocks</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>vpindex</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>runtime</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>synic</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>stimer</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>reset</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>vendor_id</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>frequencies</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>reenlightenment</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>tlbflush</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>ipi</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>avic</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>emsr_bitmap</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>xmm_input</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </hyperv>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <launchSecurity supported='no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </features>
Oct  9 09:50:43 compute-0 nova_compute[186208]: </domainCapabilities>
Oct  9 09:50:43 compute-0 nova_compute[186208]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
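[editor's note] The dump above is the libvirt domainCapabilities document that nova-compute fetches per (arch, machine type) pair via _get_domain_capabilities; the <model usable='...'> entries and their <blockers> elements are what nova consults when validating configured CPU models against this host. As a minimal, hypothetical sketch (not nova's own code path), the same document can be fetched and the unusable models listed with libvirt-python and ElementTree; the connection URI, emulator path, and machine type below are taken from this log, and the arch is assumed to be x86_64 for the first dump (the i686 variant follows below):

    import xml.etree.ElementTree as ET

    import libvirt  # assumes the libvirt-python bindings are installed

    # Connect to the local system hypervisor, as nova-compute does.
    conn = libvirt.open('qemu:///system')

    # Emulator binary, arch, machine type, and virt type as seen in this log;
    # arch=x86_64 is an assumption for the first dump.
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm', 'x86_64', 'pc-q35-rhel9.6.0', 'kvm', 0)

    root = ET.fromstring(caps_xml)
    mode = root.find("./cpu/mode[@name='custom']")

    # Collect the features blocking each unusable model, mirroring the
    # <blockers model='...'> elements in the dump above.
    blockers = {
        b.get('model'): [f.get('name') for f in b.findall('feature')]
        for b in mode.findall('blockers')
    }

    for model in mode.findall('model'):
        if model.get('usable') == 'no':
            missing = ', '.join(blockers.get(model.text, []))
            print(f"{model.text}: blocked by {missing}")

    conn.close()

On this host such a listing would show, for example, every SapphireRapids variant blocked by the AMX/AVX-512 feature set and Skylake-Client blocked by hle/rtm, matching the blockers recorded above.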
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.224 2 DEBUG nova.virt.libvirt.host [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct  9 09:50:43 compute-0 nova_compute[186208]: <domainCapabilities>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <domain>kvm</domain>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <arch>i686</arch>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <vcpu max='4096'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <iothreads supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <os supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <enum name='firmware'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <loader supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='type'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>rom</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>pflash</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='readonly'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>yes</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>no</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='secure'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>no</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </loader>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </os>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <cpu>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>on</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>off</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </mode>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='maximumMigratable'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>on</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>off</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </mode>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <vendor>AMD</vendor>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='succor'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </mode>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <mode name='custom' supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Broadwell'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cooperlake'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Denverton'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Denverton-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='GraniteRapids'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx10'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx10-128'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx10-256'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx10-512'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Haswell'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Haswell-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Haswell-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='KnightsMill'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Opteron_G4'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xop'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xop'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Opteron_G5'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xop'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xop'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SapphireRapids'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SierraForest'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Client'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Snowridge'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='athlon'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='athlon-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='core2duo'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='core2duo-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='coreduo'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='coreduo-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='n270'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='n270-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='phenom'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='phenom-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </mode>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </cpu>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <memoryBacking supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <enum name='sourceType'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <value>file</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <value>anonymous</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <value>memfd</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </memoryBacking>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <devices>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <disk supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='diskDevice'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>disk</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>cdrom</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>floppy</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>lun</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='bus'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>fdc</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>scsi</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>usb</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>sata</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='model'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </disk>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <graphics supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='type'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>vnc</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>egl-headless</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>dbus</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </graphics>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <video supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='modelType'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>vga</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>cirrus</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>none</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>bochs</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>ramfb</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </video>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <hostdev supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='mode'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>subsystem</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='startupPolicy'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>default</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>mandatory</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>requisite</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>optional</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='subsysType'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>usb</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>pci</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>scsi</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='capsType'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='pciBackend'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </hostdev>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <rng supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='model'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>random</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>egd</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>builtin</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </rng>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <filesystem supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='driverType'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>path</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>handle</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtiofs</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </filesystem>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <tpm supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='model'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>tpm-tis</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>tpm-crb</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>emulator</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>external</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='backendVersion'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>2.0</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </tpm>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <redirdev supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='bus'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>usb</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </redirdev>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <channel supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='type'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>pty</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>unix</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </channel>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <crypto supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='model'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='type'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>qemu</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>builtin</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </crypto>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <interface supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='backendType'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>default</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>passt</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </interface>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <panic supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='model'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>isa</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>hyperv</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </panic>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </devices>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <features>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <gic supported='no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <genid supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <backup supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <async-teardown supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <ps2 supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <sev supported='no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <sgx supported='no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <hyperv supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='features'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>relaxed</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>vapic</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>spinlocks</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>vpindex</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>runtime</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>synic</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>stimer</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>reset</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>vendor_id</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>frequencies</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>reenlightenment</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>tlbflush</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>ipi</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>avic</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>emsr_bitmap</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>xmm_input</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </hyperv>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <launchSecurity supported='no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </features>
Oct  9 09:50:43 compute-0 nova_compute[186208]: </domainCapabilities>
Oct  9 09:50:43 compute-0 nova_compute[186208]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.259 2 DEBUG nova.virt.libvirt.host [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.262 2 DEBUG nova.virt.libvirt.host [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct  9 09:50:43 compute-0 nova_compute[186208]: <domainCapabilities>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <domain>kvm</domain>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <arch>x86_64</arch>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <vcpu max='240'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <iothreads supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <os supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <enum name='firmware'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <loader supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='type'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>rom</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>pflash</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='readonly'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>yes</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>no</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='secure'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>no</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </loader>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </os>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <cpu>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>on</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>off</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </mode>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='maximumMigratable'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>on</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>off</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </mode>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <vendor>AMD</vendor>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='succor'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </mode>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <mode name='custom' supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Broadwell'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cooperlake'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Denverton'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Denverton-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='GraniteRapids'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx10'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx10-128'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx10-256'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx10-512'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Haswell'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Haswell-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Haswell-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='KnightsMill'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Opteron_G4'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xop'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xop'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Opteron_G5'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xop'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xop'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SapphireRapids'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:43.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SierraForest'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Client'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Snowridge'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='athlon'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='athlon-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='core2duo'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='core2duo-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='coreduo'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='coreduo-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='n270'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='n270-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='phenom'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='phenom-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </mode>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </cpu>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <memoryBacking supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <enum name='sourceType'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <value>file</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <value>anonymous</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <value>memfd</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </memoryBacking>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <devices>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <disk supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='diskDevice'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>disk</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>cdrom</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>floppy</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>lun</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='bus'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>ide</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>fdc</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>scsi</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>usb</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>sata</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='model'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </disk>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <graphics supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='type'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>vnc</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>egl-headless</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>dbus</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </graphics>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <video supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='modelType'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>vga</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>cirrus</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>none</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>bochs</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>ramfb</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </video>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <hostdev supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='mode'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>subsystem</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='startupPolicy'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>default</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>mandatory</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>requisite</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>optional</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='subsysType'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>usb</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>pci</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>scsi</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='capsType'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='pciBackend'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </hostdev>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <rng supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='model'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>random</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>egd</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>builtin</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </rng>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <filesystem supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='driverType'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>path</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>handle</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtiofs</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </filesystem>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <tpm supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='model'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>tpm-tis</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>tpm-crb</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>emulator</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>external</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='backendVersion'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>2.0</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </tpm>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <redirdev supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='bus'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>usb</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </redirdev>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <channel supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='type'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>pty</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>unix</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </channel>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <crypto supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='model'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='type'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>qemu</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>builtin</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </crypto>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <interface supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='backendType'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>default</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>passt</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </interface>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <panic supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='model'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>isa</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>hyperv</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </panic>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </devices>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <features>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <gic supported='no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <genid supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <backup supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <async-teardown supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <ps2 supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <sev supported='no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <sgx supported='no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <hyperv supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='features'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>relaxed</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>vapic</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>spinlocks</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>vpindex</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>runtime</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>synic</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>stimer</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>reset</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>vendor_id</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>frequencies</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>reenlightenment</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>tlbflush</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>ipi</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>avic</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>emsr_bitmap</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>xmm_input</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </hyperv>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <launchSecurity supported='no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </features>
Oct  9 09:50:43 compute-0 nova_compute[186208]: </domainCapabilities>
Oct  9 09:50:43 compute-0 nova_compute[186208]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.308 2 DEBUG nova.virt.libvirt.host [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct  9 09:50:43 compute-0 nova_compute[186208]: <domainCapabilities>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <domain>kvm</domain>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <arch>x86_64</arch>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <vcpu max='4096'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <iothreads supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <os supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <enum name='firmware'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <value>efi</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <loader supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='type'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>rom</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>pflash</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='readonly'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>yes</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>no</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='secure'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>yes</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>no</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </loader>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </os>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <cpu>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>on</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>off</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </mode>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='maximumMigratable'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>on</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>off</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </mode>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <vendor>AMD</vendor>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='succor'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </mode>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <mode name='custom' supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Broadwell'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cooperlake'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Denverton'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Denverton-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='GraniteRapids'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx10'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx10-128'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx10-256'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx10-512'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Haswell'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Haswell-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Haswell-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='KnightsMill'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Opteron_G4'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xop'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xop'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Opteron_G5'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xop'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xop'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SapphireRapids'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='la57'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SierraForest'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Client'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='hle'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Snowridge'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='athlon'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='athlon-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='core2duo'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='core2duo-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='coreduo'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='coreduo-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='n270'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='n270-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='ss'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='phenom'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <blockers model='phenom-v1'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </blockers>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </mode>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </cpu>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <memoryBacking supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <enum name='sourceType'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <value>file</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <value>anonymous</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <value>memfd</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </memoryBacking>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <devices>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <disk supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='diskDevice'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>disk</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>cdrom</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>floppy</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>lun</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='bus'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>fdc</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>scsi</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>usb</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>sata</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='model'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </disk>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <graphics supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='type'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>vnc</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>egl-headless</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>dbus</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </graphics>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <video supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='modelType'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>vga</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>cirrus</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>none</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>bochs</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>ramfb</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </video>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <hostdev supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='mode'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>subsystem</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='startupPolicy'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>default</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>mandatory</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>requisite</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>optional</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='subsysType'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>usb</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>pci</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>scsi</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='capsType'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='pciBackend'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </hostdev>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <rng supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='model'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>random</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>egd</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>builtin</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </rng>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <filesystem supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='driverType'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>path</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>handle</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>virtiofs</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </filesystem>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <tpm supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='model'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>tpm-tis</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>tpm-crb</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>emulator</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>external</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='backendVersion'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>2.0</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </tpm>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <redirdev supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='bus'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>usb</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </redirdev>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <channel supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='type'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>pty</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>unix</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </channel>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <crypto supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='model'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='type'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>qemu</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>builtin</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </crypto>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <interface supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='backendType'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>default</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>passt</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </interface>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <panic supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='model'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>isa</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>hyperv</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </panic>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </devices>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  <features>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <gic supported='no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <genid supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <backup supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <async-teardown supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <ps2 supported='yes'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <sev supported='no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <sgx supported='no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <hyperv supported='yes'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      <enum name='features'>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>relaxed</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>vapic</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>spinlocks</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>vpindex</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>runtime</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>synic</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>stimer</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>reset</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>vendor_id</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>frequencies</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>reenlightenment</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>tlbflush</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>ipi</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>avic</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>emsr_bitmap</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:        <value>xmm_input</value>
Oct  9 09:50:43 compute-0 nova_compute[186208]:      </enum>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    </hyperv>
Oct  9 09:50:43 compute-0 nova_compute[186208]:    <launchSecurity supported='no'/>
Oct  9 09:50:43 compute-0 nova_compute[186208]:  </features>
Oct  9 09:50:43 compute-0 nova_compute[186208]: </domainCapabilities>
Oct  9 09:50:43 compute-0 nova_compute[186208]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
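
The dump that ends here is the host's libvirt domainCapabilities document, which nova fetches once at startup to decide which CPU models, devices and features it can offer to guests. A minimal sketch of the same query via the libvirt-python bindings; the connection URI and architecture below are assumptions, not values from the log:

    import libvirt
    import xml.etree.ElementTree as ET

    # Fetch the same domainCapabilities XML that nova logged above.
    conn = libvirt.open('qemu:///system')
    caps_xml = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
    conn.close()

    # Print the CPU models the host can run unmodified, i.e. the
    # <model usable='yes'> entries of the custom mode in the dump.
    root = ET.fromstring(caps_xml)
    for model in root.findall("./cpu/mode[@name='custom']/model"):
        if model.get('usable') == 'yes':
            print(model.text)
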
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.347 2 DEBUG nova.virt.libvirt.host [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.347 2 INFO nova.virt.libvirt.host [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Secure Boot support detected#033[00m
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.348 2 INFO nova.virt.libvirt.driver [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.355 2 DEBUG nova.virt.libvirt.driver [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.371 2 INFO nova.virt.node [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Determined node identity f97cf330-2912-473f-81a8-cda2f8811838 from /var/lib/nova/compute_id#033[00m
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.386 2 WARNING nova.compute.manager [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Compute nodes ['f97cf330-2912-473f-81a8-cda2f8811838'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.401 2 INFO nova.compute.manager [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.419 2 WARNING nova.compute.manager [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.419 2 DEBUG oslo_concurrency.lockutils [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.419 2 DEBUG oslo_concurrency.lockutils [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.419 2 DEBUG oslo_concurrency.lockutils [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.420 2 DEBUG nova.compute.resource_tracker [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.420 2 DEBUG oslo_concurrency.processutils [None req-466f956b-4f39-49f8-ba5c-ab2ee0e6eff2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
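
The ceph df probe above is how the resource tracker sizes the RBD-backed storage during its audit. A small sketch of the same call and the totals it reads; the command line is verbatim from the log, while the stats/total_avail_bytes keys reflect the usual ceph df JSON layout and are an assumption here:

    import json
    import subprocess

    # Same command line as in the log above.
    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    stats = json.loads(out)['stats']
    print('total GiB:', stats['total_bytes'] / 1024 ** 3)
    print('avail GiB:', stats['total_avail_bytes'] / 1024 ** 3)
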
Oct  9 09:50:43 compute-0 python3.9[187368]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
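
The ansible task above is what kicks off the stop/start sequence that follows. Reduced to a minimal sketch of its effect (the module itself talks to systemd over D-Bus; the plain systemctl call below is the equivalent, not the actual code path):

    import subprocess

    # Equivalent of ansible.builtin.systemd with state=restarted.
    subprocess.check_call(['systemctl', 'restart', 'edpm_nova_compute.service'])
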
Oct  9 09:50:43 compute-0 systemd[1]: Stopping nova_compute container...
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.704 2 DEBUG oslo_concurrency.lockutils [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.705 2 DEBUG oslo_concurrency.lockutils [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:50:43 compute-0 nova_compute[186208]: 2025-10-09 09:50:43.705 2 DEBUG oslo_concurrency.lockutils [None req-e0037acc-9ace-4afc-bf9a-7820c89f6961 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:50:43 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:50:43 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/331903618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:50:43 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v515: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:50:43 compute-0 virtqemud[187041]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct  9 09:50:43 compute-0 systemd[1]: libpod-11f8d9fb149efab552822aef2596b2e7646ddb3066789052699103de322e79d7.scope: Deactivated successfully.
Oct  9 09:50:43 compute-0 virtqemud[187041]: hostname: compute-0
Oct  9 09:50:43 compute-0 virtqemud[187041]: End of file while reading data: Input/output error
Oct  9 09:50:43 compute-0 systemd[1]: libpod-11f8d9fb149efab552822aef2596b2e7646ddb3066789052699103de322e79d7.scope: Consumed 2.821s CPU time.
Oct  9 09:50:43 compute-0 conmon[186208]: conmon 11f8d9fb149efab55282 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-11f8d9fb149efab552822aef2596b2e7646ddb3066789052699103de322e79d7.scope/container/memory.events
Oct  9 09:50:43 compute-0 podman[187392]: 2025-10-09 09:50:43.968522748 +0000 UTC m=+0.290873102 container died 11f8d9fb149efab552822aef2596b2e7646ddb3066789052699103de322e79d7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Oct  9 09:50:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-11f8d9fb149efab552822aef2596b2e7646ddb3066789052699103de322e79d7-userdata-shm.mount: Deactivated successfully.
Oct  9 09:50:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0447301f3b5630befd23cfb22b21d988f236b695bb718b8ffb5b4a582edab77c-merged.mount: Deactivated successfully.
Oct  9 09:50:44 compute-0 podman[187392]: 2025-10-09 09:50:44.079383298 +0000 UTC m=+0.401733653 container cleanup 11f8d9fb149efab552822aef2596b2e7646ddb3066789052699103de322e79d7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct  9 09:50:44 compute-0 podman[187392]: nova_compute
Oct  9 09:50:44 compute-0 podman[187418]: nova_compute
Oct  9 09:50:44 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct  9 09:50:44 compute-0 systemd[1]: Stopped nova_compute container.
Oct  9 09:50:44 compute-0 systemd[1]: Starting nova_compute container...
Oct  9 09:50:44 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0447301f3b5630befd23cfb22b21d988f236b695bb718b8ffb5b4a582edab77c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0447301f3b5630befd23cfb22b21d988f236b695bb718b8ffb5b4a582edab77c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0447301f3b5630befd23cfb22b21d988f236b695bb718b8ffb5b4a582edab77c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0447301f3b5630befd23cfb22b21d988f236b695bb718b8ffb5b4a582edab77c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0447301f3b5630befd23cfb22b21d988f236b695bb718b8ffb5b4a582edab77c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:44 compute-0 podman[187427]: 2025-10-09 09:50:44.202992722 +0000 UTC m=+0.062595103 container init 11f8d9fb149efab552822aef2596b2e7646ddb3066789052699103de322e79d7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.build-date=20251001, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:50:44 compute-0 podman[187427]: 2025-10-09 09:50:44.2086023 +0000 UTC m=+0.068204661 container start 11f8d9fb149efab552822aef2596b2e7646ddb3066789052699103de322e79d7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251001, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  9 09:50:44 compute-0 podman[187427]: nova_compute
Oct  9 09:50:44 compute-0 nova_compute[187439]: + sudo -E kolla_set_configs
Oct  9 09:50:44 compute-0 systemd[1]: Started nova_compute container.
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Validating config file
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Copying service configuration files
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Deleting /etc/ceph
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Creating directory /etc/ceph
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Setting permission for /etc/ceph
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Writing out command to execute
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:44 compute-0 nova_compute[187439]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  9 09:50:44 compute-0 nova_compute[187439]: ++ cat /run_command
Oct  9 09:50:44 compute-0 nova_compute[187439]: + CMD=nova-compute
Oct  9 09:50:44 compute-0 nova_compute[187439]: + ARGS=
Oct  9 09:50:44 compute-0 nova_compute[187439]: + sudo kolla_copy_cacerts
Oct  9 09:50:44 compute-0 nova_compute[187439]: Running command: 'nova-compute'
Oct  9 09:50:44 compute-0 nova_compute[187439]: + [[ ! -n '' ]]
Oct  9 09:50:44 compute-0 nova_compute[187439]: + . kolla_extend_start
Oct  9 09:50:44 compute-0 nova_compute[187439]: + echo 'Running command: '\''nova-compute'\'''
Oct  9 09:50:44 compute-0 nova_compute[187439]: + umask 0022
Oct  9 09:50:44 compute-0 nova_compute[187439]: + exec nova-compute
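
Everything kolla_set_configs copied above is driven by /var/lib/kolla/config_files/config.json, and kolla_start then execs whatever /run_command names (nova-compute here). A minimal sketch of the config shape being consumed, following the standard kolla schema; the single entry below is illustrative, not the full file:

    import json

    # Illustrative config.json: a command to exec plus files to copy into
    # place with ownership and permissions (source/dest/owner/perm keys).
    config = {
        'command': 'nova-compute',
        'config_files': [
            {
                'source': '/var/lib/kolla/config_files/nova-blank.conf',
                'dest': '/etc/nova/nova.conf',
                'owner': 'nova',
                'perm': '0600',
            },
        ],
    }
    print(json.dumps(config, indent=2))
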
Oct  9 09:50:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:50:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:44.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
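
The anonymous "HEAD / HTTP/1.0" requests radosgw keeps logging have the shape of a load-balancer health probe. A sketch of such a probe, assuming a plain HTTP endpoint; the gateway's listen address and port are not shown in the log, so both values below are placeholders:

    import http.client

    # Hypothetical RGW endpoint; only the probe's source IP appears in the log.
    conn = http.client.HTTPConnection('rgw.example.com', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # 200 while the gateway is serving
    conn.close()
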
Oct  9 09:50:44 compute-0 python3.9[187603]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  9 09:50:44 compute-0 systemd[1]: Started libpod-conmon-75820f8cfb9efc3f3a845f4daedc993d01d905fb305e8ecdf7bbe1ac010a3dca.scope.
Oct  9 09:50:45 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:50:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64279b3e8e6b60a2ac8cf54bd89421d7e3fd1c48b93c4aba643b462cefb6d3c9/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64279b3e8e6b60a2ac8cf54bd89421d7e3fd1c48b93c4aba643b462cefb6d3c9/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64279b3e8e6b60a2ac8cf54bd89421d7e3fd1c48b93c4aba643b462cefb6d3c9/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:45 compute-0 podman[187622]: 2025-10-09 09:50:45.035741985 +0000 UTC m=+0.079137490 container init 75820f8cfb9efc3f3a845f4daedc993d01d905fb305e8ecdf7bbe1ac010a3dca (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, managed_by=edpm_ansible)
Oct  9 09:50:45 compute-0 podman[187622]: 2025-10-09 09:50:45.042393649 +0000 UTC m=+0.085789133 container start 75820f8cfb9efc3f3a845f4daedc993d01d905fb305e8ecdf7bbe1ac010a3dca (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.build-date=20251001, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Oct  9 09:50:45 compute-0 python3.9[187603]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct  9 09:50:45 compute-0 nova_compute_init[187649]: INFO:nova_statedir:Applying nova statedir ownership
Oct  9 09:50:45 compute-0 nova_compute_init[187649]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct  9 09:50:45 compute-0 podman[187637]: 2025-10-09 09:50:45.089616834 +0000 UTC m=+0.057543337 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  9 09:50:45 compute-0 nova_compute_init[187649]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct  9 09:50:45 compute-0 nova_compute_init[187649]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct  9 09:50:45 compute-0 nova_compute_init[187649]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct  9 09:50:45 compute-0 nova_compute_init[187649]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct  9 09:50:45 compute-0 nova_compute_init[187649]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct  9 09:50:45 compute-0 nova_compute_init[187649]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct  9 09:50:45 compute-0 nova_compute_init[187649]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct  9 09:50:45 compute-0 nova_compute_init[187649]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct  9 09:50:45 compute-0 nova_compute_init[187649]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct  9 09:50:45 compute-0 nova_compute_init[187649]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:45 compute-0 nova_compute_init[187649]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct  9 09:50:45 compute-0 nova_compute_init[187649]: INFO:nova_statedir:Nova statedir ownership complete
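
The init container's pass over /var/lib/nova, logged above, re-owns the state directory for the containerized nova uid. A condensed sketch of that walk, with the uid/gid and skip path taken from the log; the selinux relabel step is left out for brevity:

    import os

    TARGET_UID = TARGET_GID = 42436        # target ownership from the log
    SKIP = {'/var/lib/nova/compute_id'}    # NOVA_STATEDIR_OWNERSHIP_SKIP

    for dirpath, dirnames, filenames in os.walk('/var/lib/nova'):
        # Check the directory itself, then each file inside it.
        for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            if path in SKIP:
                continue
            st = os.lstat(path)
            if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                os.lchown(path, TARGET_UID, TARGET_GID)
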
Oct  9 09:50:45 compute-0 systemd[1]: libpod-75820f8cfb9efc3f3a845f4daedc993d01d905fb305e8ecdf7bbe1ac010a3dca.scope: Deactivated successfully.
Oct  9 09:50:45 compute-0 podman[187674]: 2025-10-09 09:50:45.131432619 +0000 UTC m=+0.022815316 container died 75820f8cfb9efc3f3a845f4daedc993d01d905fb305e8ecdf7bbe1ac010a3dca (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  9 09:50:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-75820f8cfb9efc3f3a845f4daedc993d01d905fb305e8ecdf7bbe1ac010a3dca-userdata-shm.mount: Deactivated successfully.
Oct  9 09:50:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-64279b3e8e6b60a2ac8cf54bd89421d7e3fd1c48b93c4aba643b462cefb6d3c9-merged.mount: Deactivated successfully.
Oct  9 09:50:45 compute-0 podman[187674]: 2025-10-09 09:50:45.157519445 +0000 UTC m=+0.048902121 container cleanup 75820f8cfb9efc3f3a845f4daedc993d01d905fb305e8ecdf7bbe1ac010a3dca (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:50:45 compute-0 systemd[1]: libpod-conmon-75820f8cfb9efc3f3a845f4daedc993d01d905fb305e8ecdf7bbe1ac010a3dca.scope: Deactivated successfully.
Oct  9 09:50:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:45.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:45 compute-0 systemd-logind[798]: Session 38 logged out. Waiting for processes to exit.
Oct  9 09:50:45 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Oct  9 09:50:45 compute-0 systemd[1]: session-38.scope: Consumed 2min 242ms CPU time.
Oct  9 09:50:45 compute-0 systemd-logind[798]: Removed session 38.
Oct  9 09:50:45 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v516: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:50:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:45 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:50:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:45 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:50:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:46 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:50:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:50:46 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.014 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.015 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.015 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.016 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct  9 09:50:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.223 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.237 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.644 2 INFO nova.virt.driver [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct  9 09:50:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.741 2 INFO nova.compute.provider_config [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct  9 09:50:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:46.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.748 2 DEBUG oslo_concurrency.lockutils [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.749 2 DEBUG oslo_concurrency.lockutils [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.749 2 DEBUG oslo_concurrency.lockutils [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.762 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.762 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.762 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.762 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.762 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.762 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.763 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.763 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.763 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.763 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.763 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.763 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.763 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.764 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.764 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.764 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.764 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.764 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.764 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.765 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.765 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.765 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.765 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.765 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.765 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.766 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.766 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.766 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.766 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.766 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.766 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.766 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.767 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.767 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.767 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.767 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.767 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.767 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.767 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.768 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.768 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.768 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.768 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.768 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.768 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.768 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.770 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.770 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.770 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.770 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.770 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.770 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.770 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.771 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.771 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.771 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.771 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.771 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.771 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.771 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.772 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.772 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.772 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.772 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.772 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.772 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.772 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.773 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.773 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.773 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.773 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.773 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.773 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.773 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.774 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.774 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.774 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.774 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.774 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.774 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.774 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.775 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.775 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.775 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.775 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.775 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.775 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.776 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.776 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.776 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.776 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.776 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.776 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.776 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.778 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.778 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.778 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.778 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.778 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.778 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.778 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.779 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.779 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.779 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.779 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.779 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.779 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.779 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.780 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.780 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.780 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.780 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.780 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.780 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.780 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.781 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.781 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
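Every line in this dump is produced by oslo.config's ConfigOpts.log_opt_values() (the cfg.py:2609 frame cited on each line), which walks all registered option groups at DEBUG level when the service starts; options registered with secret=True are masked, which is why cache.backend_argument prints as ****. A minimal sketch of the same mechanism, with illustrative option names:

    # Reproduce the option dump seen above with oslo.config (sketch).
    import logging
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.BoolOpt('enabled', default=True),
        cfg.StrOpt('memcache_password', secret=True),  # dumped masked, as ****
    ], group='cache')

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger('demo')

    CONF([])                                   # parse an empty command line
    CONF.log_opt_values(LOG, logging.DEBUG)    # one DEBUG line per option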
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.781 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.781 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.781 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.781 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.781 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.782 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.782 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.782 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.782 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.782 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.782 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.783 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.783 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.783 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.783 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
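cinder.catalog_info packs three fields into a single colon-separated string, <service_type>:<service_name>:<endpoint_type>, which nova splits when it resolves the volume endpoint from the Keystone service catalog:

    # How nova interprets cinder.catalog_info = volumev3:cinderv3:internalURL (sketch).
    service_type, service_name, interface = 'volumev3:cinderv3:internalURL'.split(':')
    print(service_type, service_name, interface)   # volumev3 cinderv3 internalURL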
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.783 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.783 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.783 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.784 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.784 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.784 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.784 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.784 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.784 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.784 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.785 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.785 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
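compute.vmdk_allowed_types is a hardening knob: only the 'streamOptimized' and 'monolithicSparse' VMDK subformats listed above will be accepted. A rough equivalent of the check, using qemu-img's JSON output (this is an illustration, not nova's actual code path; disk.vmdk is a placeholder file name):

    # Reject VMDK images whose subformat is not on the allow-list (sketch).
    import json
    import subprocess

    info = json.loads(subprocess.check_output(
        ['qemu-img', 'info', '--output=json', 'disk.vmdk']))
    subtype = info.get('format-specific', {}).get('data', {}).get('create-type')
    if subtype not in ('streamOptimized', 'monolithicSparse'):
        raise ValueError('disallowed VMDK subtype: %r' % subtype)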
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.785 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.785 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.785 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.785 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.785 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.786 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.786 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.786 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.786 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.786 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.786 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.786 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.787 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.787 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.787 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.787 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.787 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.787 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.787 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.788 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.788 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.788 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.788 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.788 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.788 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.789 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.789 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.789 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.789 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.789 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.789 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.790 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.790 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.790 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.790 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.790 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.790 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.791 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.791 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.791 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.791 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.791 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.791 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.791 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.792 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.792 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.792 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.792 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.792 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.792 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.792 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.793 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.793 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.793 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.793 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.793 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.793 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.793 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.794 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.794 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.794 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.794 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.794 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.794 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
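The database.* and api_database.* groups show identical pool tuning (masked connection URLs, a pool of 5 with overflow 50, hourly connection recycling). Via oslo.db these settings end up as SQLAlchemy engine parameters; a direct create_engine() call along these lines is only an illustration (the URL is a placeholder and assumes the pymysql driver), not nova's actual wiring:

    # Approximate SQLAlchemy engine the database.* values describe (sketch).
    from sqlalchemy import create_engine

    engine = create_engine(
        'mysql+pymysql://nova:****@db-host/nova',  # database.connection (masked in the log)
        pool_size=5,         # database.max_pool_size
        max_overflow=50,     # database.max_overflow
        pool_recycle=3600,   # database.connection_recycle_time, in seconds
    )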
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.794 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.795 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.795 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.795 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.795 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.795 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.795 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.795 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.796 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.796 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.796 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.796 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.796 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.796 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.796 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.797 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.797 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.797 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.797 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.797 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.797 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.797 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.798 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.798 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.798 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.798 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.798 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.798 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.799 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.799 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.799 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.799 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.799 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.799 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
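glance.service_type = image, glance.valid_interfaces = ['internal'] and glance.region_name = regionOne drive keystoneauth endpoint discovery for the image service. A hedged sketch of the equivalent lookup (the auth URL and credentials are placeholders, not values from this log):

    # Resolve the internal image endpoint the way the glance.* options imply (sketch).
    from keystoneauth1 import adapter, session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='nova', password='secret',
                       project_name='service',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)
    glance = adapter.Adapter(session=sess,
                             service_type='image',       # glance.service_type
                             interface='internal',       # glance.valid_interfaces
                             region_name='regionOne')    # glance.region_name
    # glance.get('/v2/images') now targets the internal image endpoint.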
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.799 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.800 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.800 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.800 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.800 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.800 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.800 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.800 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.801 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.801 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.801 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.801 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.801 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.801 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.801 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.802 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.802 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.802 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.802 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.802 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.803 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.803 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.803 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.803 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.803 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.803 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.803 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.804 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.804 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.804 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.804 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.804 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.804 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.804 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.805 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.805 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.805 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.805 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.805 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.805 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.805 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.806 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.806 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.806 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.806 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.806 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.806 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.806 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.807 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.807 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.807 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.807 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.807 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.807 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.807 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
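The two key_manager lines above record the secret-store selection: the backend is barbican, and fixed_key is masked in the dump. A minimal nova.conf sketch of that selection, using only values visible in the log (the masked fixed_key is deliberately left out rather than guessed):

    [key_manager]
    backend = barbican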
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.808 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.808 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.808 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.808 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.808 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.808 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.809 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.809 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.809 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.809 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.809 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.809 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.809 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.810 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.810 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.810 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.810 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
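Most of the [barbican] group above sits at its defaults; in particular, barbican.auth_endpoint still reads http://localhost/identity/v3, the upstream default, while barbican_endpoint_type is internal. A sketch of how a deployment would typically pin these, where the auth_endpoint URL is a hypothetical override (borrowed from the Keystone endpoint this same dump uses for placement later on) and not a value the barbican group above contains:

    [barbican]
    # Hypothetical override; the dump above shows the localhost default.
    auth_endpoint = https://keystone-internal.openstack.svc:5000/v3
    barbican_endpoint_type = internal
    number_of_retries = 60
    retry_delay = 1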
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.810 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.810 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.810 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.811 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.811 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.811 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.811 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.811 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.811 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.812 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.812 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.812 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.812 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.812 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.812 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.812 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.813 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.813 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.813 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.813 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.813 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.813 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.813 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.814 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.814 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
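The [vault] group is dumped even though key_manager.backend = barbican selects the Barbican driver, so the values above (vault_url = http://127.0.0.1:8200, kv_version = 2) are registered defaults rather than an active configuration. Purely for illustration, switching the key manager to Vault would look roughly like this, reusing the defaults shown in the log:

    [key_manager]
    backend = vault

    [vault]
    vault_url = http://127.0.0.1:8200
    kv_mountpoint = secret
    kv_version = 2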
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.814 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.814 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.814 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.814 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.814 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.815 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.815 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.815 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.815 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.815 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.815 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.815 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.816 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.816 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.816 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.816 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.816 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.816 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.816 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.817 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.817 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.817 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.817 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.817 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.817 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.818 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.818 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.818 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.818 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.818 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.818 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.818 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.819 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.819 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.819 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.819 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.819 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.819 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.819 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.820 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.820 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.820 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.820 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.820 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.820 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.820 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.821 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.821 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.821 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.821 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.821 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.821 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.822 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.822 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.822 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.822 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.822 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.822 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.822 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.823 2 WARNING oslo_config.cfg [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  9 09:50:46 compute-0 nova_compute[187439]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  9 09:50:46 compute-0 nova_compute[187439]: allow changing the live migration scheme and target URI: ``live_migration_scheme``
Oct  9 09:50:46 compute-0 nova_compute[187439]: and ``live_migration_inbound_addr`` respectively.
Oct  9 09:50:46 compute-0 nova_compute[187439]: ).  Its value may be silently ignored in the future.#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.823 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
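The warning above spells out the migration path: the single URI template qemu+tls://%s/system can be replaced by the scheme and inbound-address options it names. A minimal sketch of that replacement, where the tls scheme is inferred from the qemu+tls transport in the logged URI and the address is a placeholder, not a value from this log:

    [libvirt]
    # Replaces live_migration_uri = qemu+tls://%s/system per the warning above.
    live_migration_scheme = tls
    # Placeholder; set to the host's migration-network address.
    live_migration_inbound_addr = <migration-network-address>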
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.823 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.823 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.823 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.823 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.824 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.824 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.824 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.824 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.824 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.824 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.825 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.825 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.825 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.825 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.825 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.825 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.825 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.826 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.rbd_secret_uuid        = 286f8bf0-da72-5823-9a4e-ac4457d9e609 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.826 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.826 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.826 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.826 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.826 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.827 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.827 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.827 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.827 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.827 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.827 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.827 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.828 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.828 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.828 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.828 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.828 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.828 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.829 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.829 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.829 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.829 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.829 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.829 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.829 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.830 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.830 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.830 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.830 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.830 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.830 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.830 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.831 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
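Taken together, the [libvirt] dump above describes a KVM host backed by Ceph RBD with TLS-native live migration. A consolidated reconstruction of the non-default options as they would appear in nova.conf, using only values shown in the log (this is a readback of the dump, not the literal file on disk):

    [libvirt]
    virt_type = kvm
    cpu_mode = host-model
    hw_machine_type = x86_64=q35
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = openstack
    rbd_secret_uuid = 286f8bf0-da72-5823-9a4e-ac4457d9e609
    live_migration_permit_auto_converge = True
    live_migration_permit_post_copy = True
    live_migration_with_native_tls = True
    volume_use_multipath = True
    swtpm_enabled = True
    rx_queue_size = 512
    tx_queue_size = 512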
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.831 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.831 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.831 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.831 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.831 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.832 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.832 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.832 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.832 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.832 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.832 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.832 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.833 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.833 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.833 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.833 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.833 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.833 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.833 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.834 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.834 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.834 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.834 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.834 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.834 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.834 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.835 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.835 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
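The [neutron] group above shows this compute node serving instance metadata through Neutron (service_metadata_proxy = True) over the br-int integration bridge in regionOne. A sketch of the matching nova.conf section; the shared secret is masked as **** in the dump, so a placeholder stands in for it here:

    [neutron]
    auth_type = password
    region_name = regionOne
    ovs_bridge = br-int
    service_metadata_proxy = True
    # Masked in the dump above; placeholder only.
    metadata_proxy_shared_secret = <shared-secret>
    valid_interfaces = internal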
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.835 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.835 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.835 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.835 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.835 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
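With notification_format = unversioned, only legacy notifications are emitted, so the versioned_notifications_topics entry above is a registered default that goes unused under this setting (a reading of the dump, not something the log states directly). The corresponding nova.conf lines:

    [notifications]
    notification_format = unversioned
    default_level = INFO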
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.836 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.836 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.836 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.836 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.836 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.836 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.836 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.837 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.837 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.837 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.837 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.837 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.837 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.837 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.838 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.838 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.838 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.838 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.838 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.838 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.838 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.839 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.839 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.839 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.839 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.839 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.839 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.839 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.840 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.840 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.840 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.840 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.840 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.840 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.840 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.841 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.841 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.841 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.841 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.841 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.841 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.841 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.842 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.842 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.842 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.842 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.842 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.842 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.842 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.843 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.843 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.843 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.843 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.843 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.843 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.844 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.844 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.844 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.844 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.844 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.844 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.845 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.845 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.845 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.845 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.845 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.845 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.845 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.846 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.846 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.846 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.846 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.846 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.846 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.846 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.847 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.847 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.847 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.847 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.847 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.847 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.847 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.848 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.848 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.848 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.848 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.848 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.848 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.848 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.849 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.849 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.849 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.849 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.849 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.849 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.850 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.850 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.850 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.850 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.850 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.850 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.851 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.851 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.851 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.851 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.851 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.852 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.852 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.852 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.852 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.853 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.853 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.853 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.853 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.853 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.853 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.854 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.854 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.854 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.854 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.854 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.854 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.854 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.855 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.855 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.855 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.855 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.855 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.855 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.856 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.856 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.856 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.856 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.856 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.856 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.856 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.857 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.857 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.857 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.857 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.857 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.857 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.858 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.858 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.858 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.858 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.858 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.858 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.858 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.859 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.859 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.859 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.859 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.859 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.859 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.859 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.860 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.860 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.860 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.860 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.860 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.860 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.860 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.861 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.861 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.861 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.861 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.861 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.862 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.862 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.862 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.862 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.862 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.862 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.862 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.863 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.863 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.863 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.863 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.863 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.863 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.863 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.864 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.864 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.864 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.864 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.864 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.864 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.864 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.865 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.865 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.865 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.865 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.865 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.865 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.866 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.866 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.866 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.866 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.866 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.866 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.866 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.867 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.867 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.867 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.867 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.867 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.867 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.867 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.868 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.868 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.868 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.868 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.868 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.868 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.869 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.869 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.869 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.869 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.869 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.869 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.869 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.870 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.870 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.870 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.870 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.870 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.870 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.870 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.871 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.871 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.871 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.871 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.871 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.871 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.872 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.872 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.872 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.872 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.872 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.872 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.872 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.873 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.873 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.873 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.873 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.873 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.873 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.873 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.874 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.874 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.874 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.874 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.874 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.874 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.874 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.875 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.875 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.875 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.875 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.875 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.875 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.876 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.876 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.876 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.876 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.876 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.876 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.876 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.877 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.877 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.877 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.877 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.877 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.877 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.877 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.878 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.878 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.878 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.878 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.878 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.878 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.878 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.879 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.879 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.879 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.879 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.879 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.879 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.879 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.880 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.880 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.880 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.880 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.880 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.880 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.880 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.881 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.881 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.881 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.881 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.881 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.881 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.882 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.882 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.882 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.882 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.882 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.882 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.882 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.883 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.883 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.883 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.883 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.883 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.883 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.883 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.884 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.884 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.884 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.884 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.884 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.884 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.884 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.885 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.885 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.885 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.885 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.885 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.886 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.886 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.886 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.886 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.886 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.886 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.887 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.887 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.887 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.887 2 DEBUG oslo_service.service [None req-b1be80f9-3810-4561-9602-ae5ea8180f97 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.888 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.904 2 INFO nova.virt.node [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Determined node identity f97cf330-2912-473f-81a8-cda2f8811838 from /var/lib/nova/compute_id
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.905 2 DEBUG nova.virt.libvirt.host [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.907 2 DEBUG nova.virt.libvirt.host [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.907 2 DEBUG nova.virt.libvirt.host [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.908 2 DEBUG nova.virt.libvirt.host [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.923 2 DEBUG nova.virt.libvirt.host [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f9ef12464c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.927 2 DEBUG nova.virt.libvirt.host [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f9ef12464c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.928 2 INFO nova.virt.libvirt.driver [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Connection event '1' reason 'None'
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.934 2 INFO nova.virt.libvirt.host [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Libvirt host capabilities <capabilities>
Oct  9 09:50:46 compute-0 nova_compute[187439]: 
Oct  9 09:50:46 compute-0 nova_compute[187439]:  <host>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <uuid>c2ce88da-801c-421f-a8d6-32aab8dfbba9</uuid>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <cpu>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <arch>x86_64</arch>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model>EPYC-Milan-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <vendor>AMD</vendor>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <microcode version='167776725'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <signature family='25' model='1' stepping='1'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <topology sockets='4' dies='1' clusters='1' cores='1' threads='1'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <maxphysaddr mode='emulate' bits='48'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='x2apic'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='tsc-deadline'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='osxsave'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='hypervisor'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='tsc_adjust'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='ospke'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='vaes'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='vpclmulqdq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='spec-ctrl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='stibp'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='arch-capabilities'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='ssbd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='cmp_legacy'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='virt-ssbd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='lbrv'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='tsc-scale'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='vmcb-clean'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='pause-filter'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='pfthreshold'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='v-vmsave-vmload'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='vgif'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='rdctl-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='skip-l1dfl-vmentry'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='mds-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature name='pschange-mc-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <pages unit='KiB' size='4'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <pages unit='KiB' size='2048'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <pages unit='KiB' size='1048576'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </cpu>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <power_management>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <suspend_mem/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </power_management>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <iommu support='no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <migration_features>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <live/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <uri_transports>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <uri_transport>tcp</uri_transport>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <uri_transport>rdma</uri_transport>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </uri_transports>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </migration_features>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <topology>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <cells num='1'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <cell id='0'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:          <memory unit='KiB'>7865152</memory>
Oct  9 09:50:46 compute-0 nova_compute[187439]:          <pages unit='KiB' size='4'>1966288</pages>
Oct  9 09:50:46 compute-0 nova_compute[187439]:          <pages unit='KiB' size='2048'>0</pages>
Oct  9 09:50:46 compute-0 nova_compute[187439]:          <pages unit='KiB' size='1048576'>0</pages>
Oct  9 09:50:46 compute-0 nova_compute[187439]:          <distances>
Oct  9 09:50:46 compute-0 nova_compute[187439]:            <sibling id='0' value='10'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:          </distances>
Oct  9 09:50:46 compute-0 nova_compute[187439]:          <cpus num='4'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:          </cpus>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        </cell>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </cells>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </topology>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <cache>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </cache>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <secmodel>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model>selinux</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <doi>0</doi>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </secmodel>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <secmodel>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model>dac</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <doi>0</doi>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </secmodel>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  </host>
Oct  9 09:50:46 compute-0 nova_compute[187439]: 
Oct  9 09:50:46 compute-0 nova_compute[187439]:  <guest>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <os_type>hvm</os_type>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <arch name='i686'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <wordsize>32</wordsize>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <domain type='qemu'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <domain type='kvm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </arch>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <features>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <pae/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <nonpae/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <acpi default='on' toggle='yes'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <apic default='on' toggle='no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <cpuselection/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <deviceboot/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <disksnapshot default='on' toggle='no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <externalSnapshot/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </features>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  </guest>
Oct  9 09:50:46 compute-0 nova_compute[187439]: 
Oct  9 09:50:46 compute-0 nova_compute[187439]:  <guest>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <os_type>hvm</os_type>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <arch name='x86_64'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <wordsize>64</wordsize>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <domain type='qemu'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <domain type='kvm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </arch>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <features>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <acpi default='on' toggle='yes'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <apic default='on' toggle='no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <cpuselection/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <deviceboot/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <disksnapshot default='on' toggle='no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <externalSnapshot/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </features>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  </guest>
Oct  9 09:50:46 compute-0 nova_compute[187439]: 
Oct  9 09:50:46 compute-0 nova_compute[187439]: </capabilities>
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.941 2 DEBUG nova.virt.libvirt.host [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.944 2 DEBUG nova.virt.libvirt.host [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct  9 09:50:46 compute-0 nova_compute[187439]: <domainCapabilities>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  <domain>kvm</domain>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  <arch>i686</arch>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  <vcpu max='4096'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  <iothreads supported='yes'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  <os supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <enum name='firmware'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <loader supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='type'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>rom</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>pflash</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='readonly'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>yes</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>no</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='secure'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>no</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </loader>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  </os>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  <cpu>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>on</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>off</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </mode>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='maximumMigratable'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>on</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>off</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </mode>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <vendor>AMD</vendor>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='succor'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </mode>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <mode name='custom' supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Broadwell'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Cooperlake'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Denverton'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Denverton-v1'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='auto-ibrs'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='auto-ibrs'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='GraniteRapids'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx10'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx10-128'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx10-256'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx10-512'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Haswell'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Haswell-v1'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Haswell-v3'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='KnightsMill'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512er'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512pf'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512er'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512pf'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Opteron_G4'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='xop'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='xop'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Opteron_G5'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='tbm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='xop'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='tbm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='xop'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='SapphireRapids'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='SierraForest'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx-ifma'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='cmpccxadd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx-ifma'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='cmpccxadd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Skylake-Client'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Snowridge'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='athlon'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='athlon-v1'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='core2duo'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='core2duo-v1'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='coreduo'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='coreduo-v1'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='n270'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='n270-v1'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='phenom'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <blockers model='phenom-v1'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </mode>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  </cpu>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  <memoryBacking supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <enum name='sourceType'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <value>file</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <value>anonymous</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <value>memfd</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  </memoryBacking>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  <devices>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <disk supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='diskDevice'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>disk</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>cdrom</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>floppy</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>lun</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='bus'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>fdc</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>scsi</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>virtio</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>usb</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>sata</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='model'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>virtio</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>virtio-transitional</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>virtio-non-transitional</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </disk>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <graphics supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='type'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>vnc</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>egl-headless</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>dbus</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </graphics>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <video supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='modelType'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>vga</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>cirrus</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>virtio</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>none</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>bochs</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>ramfb</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </video>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <hostdev supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='mode'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>subsystem</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='startupPolicy'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>default</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>mandatory</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>requisite</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>optional</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='subsysType'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>usb</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>pci</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>scsi</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='capsType'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='pciBackend'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </hostdev>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <rng supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='model'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>virtio</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>virtio-transitional</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>virtio-non-transitional</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>random</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>egd</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>builtin</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </rng>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <filesystem supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='driverType'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>path</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>handle</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>virtiofs</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </filesystem>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <tpm supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='model'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>tpm-tis</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>tpm-crb</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>emulator</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>external</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='backendVersion'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>2.0</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </tpm>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <redirdev supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='bus'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>usb</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </redirdev>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <channel supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='type'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>pty</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>unix</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </channel>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <crypto supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='model'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='type'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>qemu</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>builtin</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </crypto>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <interface supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='backendType'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>default</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>passt</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </interface>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <panic supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='model'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>isa</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>hyperv</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </panic>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  </devices>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  <features>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <gic supported='no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <genid supported='yes'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <backup supported='yes'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <async-teardown supported='yes'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <ps2 supported='yes'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <sev supported='no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <sgx supported='no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <hyperv supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='features'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>relaxed</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>vapic</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>spinlocks</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>vpindex</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>runtime</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>synic</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>stimer</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>reset</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>vendor_id</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>frequencies</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>reenlightenment</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>tlbflush</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>ipi</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>avic</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>emsr_bitmap</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>xmm_input</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </hyperv>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <launchSecurity supported='no'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  </features>
Oct  9 09:50:46 compute-0 nova_compute[187439]: </domainCapabilities>
Oct  9 09:50:46 compute-0 nova_compute[187439]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.953 2 DEBUG nova.virt.libvirt.volume.mount [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct  9 09:50:46 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.955 2 DEBUG nova.virt.libvirt.host [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct  9 09:50:46 compute-0 nova_compute[187439]: <domainCapabilities>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  <domain>kvm</domain>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  <arch>i686</arch>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  <vcpu max='240'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  <iothreads supported='yes'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  <os supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <enum name='firmware'/>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <loader supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='type'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>rom</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>pflash</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='readonly'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>yes</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>no</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='secure'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>no</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    </loader>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  </os>
Oct  9 09:50:46 compute-0 nova_compute[187439]:  <cpu>
Oct  9 09:50:46 compute-0 nova_compute[187439]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>on</value>
Oct  9 09:50:46 compute-0 nova_compute[187439]:        <value>off</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </mode>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='maximumMigratable'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>on</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>off</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </mode>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <vendor>AMD</vendor>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='succor'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </mode>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <mode name='custom' supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Broadwell'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Cooperlake'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Denverton'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='mpx'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Denverton-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='mpx'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amd-psfd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='auto-ibrs'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='stibp-always-on'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amd-psfd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='auto-ibrs'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='stibp-always-on'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amd-psfd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='stibp-always-on'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='GraniteRapids'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-fp16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-int8'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-tile'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-fp16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fbsdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrc'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fzrm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='mcdt-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='pbrsb-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='prefetchiti'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='psdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='xfd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-fp16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-int8'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-tile'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-fp16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fbsdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrc'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fzrm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='mcdt-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='pbrsb-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='prefetchiti'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='psdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='xfd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-fp16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-int8'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-tile'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx10'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx10-128'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx10-256'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx10-512'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-fp16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='cldemote'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fbsdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrc'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fzrm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='mcdt-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdir64b'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdiri'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='pbrsb-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='prefetchiti'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='psdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='xfd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Haswell'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Haswell-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Haswell-v3'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='KnightsMill'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512er'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512pf'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512er'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512pf'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Opteron_G4'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fma4'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='xop'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fma4'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='xop'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Opteron_G5'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fma4'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='tbm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='xop'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fma4'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='tbm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='xop'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='SapphireRapids'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-int8'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-tile'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-fp16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrc'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fzrm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='xfd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-int8'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-tile'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-fp16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrc'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fzrm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='xfd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-int8'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-tile'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-fp16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fbsdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrc'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fzrm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='psdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='xfd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-int8'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-tile'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-fp16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='cldemote'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fbsdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrc'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fzrm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdir64b'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdiri'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='psdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='xfd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='SierraForest'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='cmpccxadd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fbsdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='mcdt-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='pbrsb-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='psdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='cmpccxadd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fbsdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='mcdt-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='pbrsb-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='psdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Client'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:47.016Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Snowridge'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='cldemote'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='core-capability'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdir64b'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdiri'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='mpx'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='split-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='cldemote'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='core-capability'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdir64b'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdiri'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='mpx'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='split-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='cldemote'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='core-capability'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdir64b'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdiri'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='split-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='cldemote'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='core-capability'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdir64b'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdiri'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='split-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='cldemote'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdir64b'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdiri'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='athlon'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='3dnow'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='3dnowext'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='athlon-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='3dnow'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='3dnowext'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='core2duo'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='core2duo-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='coreduo'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='coreduo-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='n270'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='n270-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='phenom'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='3dnow'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='3dnowext'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='phenom-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='3dnow'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='3dnowext'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </mode>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  </cpu>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  <memoryBacking supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <enum name='sourceType'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <value>file</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <value>anonymous</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <value>memfd</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  </memoryBacking>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  <devices>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <disk supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='diskDevice'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>disk</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>cdrom</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>floppy</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>lun</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='bus'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>ide</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>fdc</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>scsi</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>virtio</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>usb</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>sata</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='model'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>virtio</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>virtio-transitional</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>virtio-non-transitional</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </disk>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <graphics supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='type'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>vnc</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>egl-headless</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>dbus</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </graphics>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <video supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='modelType'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>vga</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>cirrus</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>virtio</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>none</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>bochs</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>ramfb</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </video>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <hostdev supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='mode'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>subsystem</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='startupPolicy'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>default</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>mandatory</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>requisite</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>optional</value>
Oct  9 09:50:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:47.024Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='subsysType'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>usb</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>pci</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>scsi</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='capsType'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='pciBackend'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </hostdev>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <rng supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='model'>
Oct  9 09:50:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:47.024Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>virtio</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>virtio-transitional</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>virtio-non-transitional</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='backendModel'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>random</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>egd</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>builtin</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </rng>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <filesystem supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='driverType'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>path</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>handle</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>virtiofs</value>
Oct  9 09:50:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:50:47.025Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </filesystem>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <tpm supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='model'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>tpm-tis</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>tpm-crb</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='backendModel'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>emulator</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>external</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='backendVersion'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>2.0</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </tpm>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <redirdev supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='bus'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>usb</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </redirdev>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <channel supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='type'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>pty</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>unix</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </channel>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <crypto supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='model'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='type'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>qemu</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='backendModel'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>builtin</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </crypto>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <interface supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='backendType'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>default</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>passt</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </interface>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <panic supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='model'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>isa</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>hyperv</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </panic>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  </devices>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  <features>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <gic supported='no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <genid supported='yes'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <backup supported='yes'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <async-teardown supported='yes'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <ps2 supported='yes'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <sev supported='no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <sgx supported='no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <hyperv supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='features'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>relaxed</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>vapic</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>spinlocks</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>vpindex</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>runtime</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>synic</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>stimer</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>reset</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>vendor_id</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>frequencies</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>reenlightenment</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>tlbflush</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>ipi</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>avic</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>emsr_bitmap</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>xmm_input</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </hyperv>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <launchSecurity supported='no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  </features>
Oct  9 09:50:47 compute-0 nova_compute[187439]: </domainCapabilities>
Oct  9 09:50:47 compute-0 nova_compute[187439]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  9 09:50:47 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.956 2 DEBUG nova.virt.libvirt.host [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct  9 09:50:47 compute-0 nova_compute[187439]: 2025-10-09 09:50:46.959 2 DEBUG nova.virt.libvirt.host [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct  9 09:50:47 compute-0 nova_compute[187439]: <domainCapabilities>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  <domain>kvm</domain>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  <arch>x86_64</arch>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  <vcpu max='4096'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  <iothreads supported='yes'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  <os supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <enum name='firmware'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <value>efi</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <loader supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='type'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>rom</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>pflash</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='readonly'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>yes</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>no</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='secure'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>yes</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>no</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </loader>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  </os>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  <cpu>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>on</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>off</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </mode>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='maximumMigratable'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>on</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>off</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </mode>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <vendor>AMD</vendor>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='succor'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </mode>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <mode name='custom' supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Broadwell'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Cooperlake'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Denverton'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='mpx'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Denverton-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='mpx'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amd-psfd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='auto-ibrs'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='stibp-always-on'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amd-psfd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='auto-ibrs'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='stibp-always-on'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amd-psfd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='stibp-always-on'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='GraniteRapids'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-fp16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-int8'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-tile'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-fp16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fbsdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrc'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fzrm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='mcdt-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='pbrsb-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='prefetchiti'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='psdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='xfd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-fp16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-int8'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-tile'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-fp16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fbsdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrc'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fzrm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='mcdt-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='pbrsb-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='prefetchiti'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='psdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='xfd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-fp16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-int8'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-tile'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx10'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx10-128'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx10-256'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx10-512'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-fp16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='cldemote'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fbsdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrc'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fzrm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='mcdt-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdir64b'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdiri'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='pbrsb-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='prefetchiti'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='psdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='xfd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Haswell'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Haswell-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Haswell-v3'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='KnightsMill'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512er'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512pf'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512er'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512pf'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Opteron_G4'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fma4'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='xop'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fma4'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='xop'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Opteron_G5'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fma4'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='tbm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='xop'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fma4'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='tbm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='xop'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='SapphireRapids'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-int8'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-tile'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-fp16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrc'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fzrm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='xfd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-int8'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-tile'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-fp16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrc'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fzrm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='xfd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-int8'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-tile'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-fp16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fbsdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrc'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fzrm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='psdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='xfd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-int8'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='amx-tile'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-fp16'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='cldemote'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fbsdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrc'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fzrm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='la57'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdir64b'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdiri'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='psdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='xfd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='SierraForest'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='cmpccxadd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fbsdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='mcdt-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='pbrsb-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='psdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-ifma'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-vnni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='cmpccxadd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fbsdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='fsrs'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='mcdt-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='pbrsb-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='psdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='serialize'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Client'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='hle'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Snowridge'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='cldemote'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='core-capability'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdir64b'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdiri'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='mpx'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='split-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='cldemote'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='core-capability'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdir64b'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdiri'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='mpx'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='split-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='cldemote'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='core-capability'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdir64b'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdiri'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='split-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='cldemote'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='core-capability'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdir64b'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdiri'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='split-lock-detect'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='cldemote'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdir64b'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='movdiri'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='athlon'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='3dnow'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='3dnowext'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='athlon-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='3dnow'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='3dnowext'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='core2duo'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='core2duo-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='coreduo'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='coreduo-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='n270'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='n270-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='ss'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='phenom'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='3dnow'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='3dnowext'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <blockers model='phenom-v1'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='3dnow'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <feature name='3dnowext'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </blockers>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </mode>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  </cpu>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  <memoryBacking supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <enum name='sourceType'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <value>file</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <value>anonymous</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <value>memfd</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  </memoryBacking>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  <devices>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <disk supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='diskDevice'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>disk</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>cdrom</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>floppy</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>lun</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='bus'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>fdc</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>scsi</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>virtio</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>usb</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>sata</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='model'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>virtio</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>virtio-transitional</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>virtio-non-transitional</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </disk>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <graphics supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='type'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>vnc</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>egl-headless</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>dbus</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </graphics>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <video supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='modelType'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>vga</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>cirrus</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>virtio</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>none</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>bochs</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>ramfb</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </video>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <hostdev supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='mode'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>subsystem</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='startupPolicy'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>default</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>mandatory</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>requisite</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>optional</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='subsysType'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>usb</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>pci</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>scsi</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='capsType'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='pciBackend'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </hostdev>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <rng supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='model'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>virtio</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>virtio-transitional</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>virtio-non-transitional</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='backendModel'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>random</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>egd</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>builtin</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </rng>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <filesystem supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='driverType'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>path</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>handle</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>virtiofs</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </filesystem>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <tpm supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='model'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>tpm-tis</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>tpm-crb</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='backendModel'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>emulator</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>external</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='backendVersion'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>2.0</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </tpm>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <redirdev supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='bus'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>usb</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </redirdev>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <channel supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='type'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>pty</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>unix</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </channel>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <crypto supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='model'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='type'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>qemu</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='backendModel'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>builtin</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </crypto>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <interface supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='backendType'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>default</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>passt</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </interface>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <panic supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='model'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>isa</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>hyperv</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </panic>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  </devices>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  <features>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <gic supported='no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <genid supported='yes'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <backup supported='yes'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <async-teardown supported='yes'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <ps2 supported='yes'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <sev supported='no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <sgx supported='no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <hyperv supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='features'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>relaxed</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>vapic</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>spinlocks</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>vpindex</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>runtime</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>synic</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>stimer</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>reset</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>vendor_id</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>frequencies</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>reenlightenment</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>tlbflush</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>ipi</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>avic</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>emsr_bitmap</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>xmm_input</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </hyperv>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <launchSecurity supported='no'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  </features>
Oct  9 09:50:47 compute-0 nova_compute[187439]: </domainCapabilities>
Oct  9 09:50:47 compute-0 nova_compute[187439]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
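[editor's note] The domainCapabilities XML above is what nova's _get_domain_capabilities helper fetches from libvirt once per (arch, machine type) pair; the next dump, for machine_type=pc, begins below. As a minimal sketch, the same query can be issued directly through the python-libvirt bindings. The connection URI and the 'q35' machine type here are illustrative assumptions (the log confirms only the emulator path and, for the following dump, machine_type=pc):

    import libvirt

    # Read-only connection to the local QEMU/KVM hypervisor (URI assumed).
    conn = libvirt.openReadOnly('qemu:///system')
    try:
        caps_xml = conn.getDomainCapabilities(
            '/usr/libexec/qemu-kvm',  # emulator binary, as reported in <path>
            'x86_64',                 # architecture
            'q35',                    # machine type (assumed; nova also queries 'pc')
            'kvm',                    # virtualization type
            0)                        # flags
        print(caps_xml)               # same XML document nova logs above
    finally:
        conn.close()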
Oct  9 09:50:47 compute-0 nova_compute[187439]: 2025-10-09 09:50:47.009 2 DEBUG nova.virt.libvirt.host [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct  9 09:50:47 compute-0 nova_compute[187439]: <domainCapabilities>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  <domain>kvm</domain>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  <arch>x86_64</arch>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  <vcpu max='240'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  <iothreads supported='yes'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  <os supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <enum name='firmware'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <loader supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='type'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>rom</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>pflash</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='readonly'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>yes</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>no</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='secure'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>no</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </loader>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  </os>
Oct  9 09:50:47 compute-0 nova_compute[187439]:  <cpu>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>on</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>off</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </mode>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <enum name='maximumMigratable'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>on</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:        <value>off</value>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      </enum>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    </mode>
Oct  9 09:50:47 compute-0 nova_compute[187439]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <vendor>AMD</vendor>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:47 compute-0 nova_compute[187439]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:52:04 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:03 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:04 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:03 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:04 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:03 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:04 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:03 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:04 compute-0 rsyslogd[1243]: imjournal: 2148 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct  9 09:52:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v557: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:52:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:52:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:52:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:04.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:05.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v558: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:52:06 compute-0 podman[189430]: 2025-10-09 09:52:06.59768167 +0000 UTC m=+0.040214763 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  9 09:52:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:06.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:07.024Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:07.035Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:07.036Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:07.036Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
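[editor's note] The three webhook retries above share one root cause: the *.shiftstack receiver hostnames do not resolve against the 192.168.122.80 resolver. A minimal sketch for reproducing the lookup failure outside alertmanager (hostnames and port 8443 are taken from the log lines; nothing else is assumed):

    import socket

    RECEIVERS = ('np0005478302.shiftstack',
                 'np0005478303.shiftstack',
                 'np0005478304.shiftstack')

    for host in RECEIVERS:
        try:
            socket.getaddrinfo(host, 8443)
            print(host, 'resolves')
        except socket.gaierror as err:
            # Expect the same "no such host" outcome the dispatcher reports.
            print(host, 'lookup failed:', err)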
Oct  9 09:52:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:07.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v559: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:52:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:08.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:08.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:08.923Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:09.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:09 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:09.126Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:09.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:52:10.102 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:52:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:52:10.103 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:52:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:52:10.103 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:52:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v560: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:52:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:10.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:11.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  9 09:52:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1258684450' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  9 09:52:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  9 09:52:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1258684450' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
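[editor's note] The two mon_command dispatches above ("df" and "osd pool get-quota" for the volumes pool) arrive as entity client.openstack, consistent with a storage-service capacity probe. A minimal sketch of issuing the same commands through the python-rados bindings; the conffile path and client name are illustrative assumptions:

    import json
    import rados

    # Configuration path and client id are assumptions for illustration.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota",
                     "pool": "volumes", "format": "json"}):
            # mon_command takes a JSON command string plus an input buffer.
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
            print(cmd["prefix"], '->', ret, out[:120])
    finally:
        cluster.shutdown()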
Oct  9 09:52:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:11 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:11 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:11 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:11 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:52:12] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Oct  9 09:52:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:52:12] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Oct  9 09:52:12 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v561: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:52:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:12.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:13.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:14 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v562: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:52:14 compute-0 podman[189480]: 2025-10-09 09:52:14.590646285 +0000 UTC m=+0.033427433 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent)
Oct  9 09:52:14 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.24728 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct  9 09:52:14 compute-0 ceph-mgr[4772]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  9 09:52:14 compute-0 ceph-mgr[4772]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  9 09:52:14 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.24731 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct  9 09:52:14 compute-0 ceph-mgr[4772]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  9 09:52:14 compute-0 ceph-mgr[4772]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  9 09:52:14 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.24731 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Oct  9 09:52:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:14.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:15.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:16 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v563: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:52:16 compute-0 podman[189500]: 2025-10-09 09:52:16.597225443 +0000 UTC m=+0.034556230 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  9 09:52:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:16.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:17.025Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:17.034Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:17.035Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:17.035Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:17.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:18 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v564: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:52:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:18.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:18.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:18.868Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:18.868Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:18.868Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:52:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:19.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:52:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:52:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:52:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:52:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:52:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:52:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:52:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:52:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:52:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:20 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v565: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:52:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:20.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:52:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:21.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:52:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:52:22] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Oct  9 09:52:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:52:22] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Oct  9 09:52:22 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v566: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 09:52:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:22.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:23.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:24 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:23 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:24 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:23 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:24 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:23 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:24 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:23 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:24 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v567: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:52:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:52:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:24.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:52:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:25.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:26 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v568: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 09:52:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:26.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:27.025Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  9 09:52:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:27.032Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:27.033Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:27.033Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.002000022s ======
Oct  9 09:52:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:27.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000022s
Oct  9 09:52:27 compute-0 podman[189527]: 2025-10-09 09:52:27.631919927 +0000 UTC m=+0.069474832 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  9 09:52:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:28 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:28 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v569: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:52:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:28.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:28.869Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:28.870Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:28.870Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:28.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:29.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:30 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v570: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:52:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:30.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:31.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:52:32] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Oct  9 09:52:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:52:32] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Oct  9 09:52:32 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v571: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 09:52:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:32.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:32 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:32 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:32 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:52:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:33.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:52:34 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v572: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:52:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:52:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:52:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:52:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:34.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:52:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:35.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:36 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v573: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:52:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:36.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:37.026Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:37.037Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:37.037Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:37.041Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:37.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:37 compute-0 podman[189586]: 2025-10-09 09:52:37.61643846 +0000 UTC m=+0.040627492 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  9 09:52:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:38 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:38 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v574: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:52:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:38.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:38.868Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:38.868Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:38.868Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:38.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:52:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:39.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:52:40 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v575: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:52:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.002000023s ======
Oct  9 09:52:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:40.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000023s
Oct  9 09:52:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:41.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:52:42] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Oct  9 09:52:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:52:42] "GET /metrics HTTP/1.1" 200 48419 "" "Prometheus/2.51.0"
Oct  9 09:52:42 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v576: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:52:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:52:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:42.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:52:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:52:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:43.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:52:44 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v577: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:52:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:44.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:52:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:45.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:52:45 compute-0 podman[189611]: 2025-10-09 09:52:45.601235838 +0000 UTC m=+0.042453096 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:52:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:46 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v578: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:52:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:46.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:47.027Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:47.035Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:47.035Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:47.036Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:52:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:47.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:52:47 compute-0 podman[189629]: 2025-10-09 09:52:47.60167798 +0000 UTC m=+0.042116942 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:52:47 compute-0 nova_compute[187439]: 2025-10-09 09:52:47.640 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:47 compute-0 nova_compute[187439]: 2025-10-09 09:52:47.653 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:47 compute-0 nova_compute[187439]: 2025-10-09 09:52:47.653 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:47 compute-0 nova_compute[187439]: 2025-10-09 09:52:47.653 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.245 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.246 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.246 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.261 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.261 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.261 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.261 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.261 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.261 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.277 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.277 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.277 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.277 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.277 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:52:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:52:48 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:52:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:52:48 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:52:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:52:48 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v579: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct  9 09:52:48 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:52:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:52:48 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:52:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:52:48 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:52:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:52:48 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:52:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:52:48 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:52:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:52:48 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2467923929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.622 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:52:48 compute-0 podman[189831]: 2025-10-09 09:52:48.788334983 +0000 UTC m=+0.030669249 container create 2b7f11b47b72a2ffaa5cf39be949afa24ec592e345c34df0b5d91d01bd3fbed6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_noyce, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:52:48 compute-0 systemd[1]: Started libpod-conmon-2b7f11b47b72a2ffaa5cf39be949afa24ec592e345c34df0b5d91d01bd3fbed6.scope.
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.824 2 WARNING nova.virt.libvirt.driver [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.825 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5049MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.826 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.826 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:52:48 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:52:48 compute-0 podman[189831]: 2025-10-09 09:52:48.836824403 +0000 UTC m=+0.079158688 container init 2b7f11b47b72a2ffaa5cf39be949afa24ec592e345c34df0b5d91d01bd3fbed6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  9 09:52:48 compute-0 podman[189831]: 2025-10-09 09:52:48.842714473 +0000 UTC m=+0.085048749 container start 2b7f11b47b72a2ffaa5cf39be949afa24ec592e345c34df0b5d91d01bd3fbed6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:52:48 compute-0 podman[189831]: 2025-10-09 09:52:48.843968147 +0000 UTC m=+0.086302413 container attach 2b7f11b47b72a2ffaa5cf39be949afa24ec592e345c34df0b5d91d01bd3fbed6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_noyce, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  9 09:52:48 compute-0 dazzling_noyce[189845]: 167 167
Oct  9 09:52:48 compute-0 systemd[1]: libpod-2b7f11b47b72a2ffaa5cf39be949afa24ec592e345c34df0b5d91d01bd3fbed6.scope: Deactivated successfully.
Oct  9 09:52:48 compute-0 podman[189831]: 2025-10-09 09:52:48.845995192 +0000 UTC m=+0.088329466 container died 2b7f11b47b72a2ffaa5cf39be949afa24ec592e345c34df0b5d91d01bd3fbed6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 09:52:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-fca726df730a07da7c758d086d495313b03051d5f46f0efce175362f8e10ca3b-merged.mount: Deactivated successfully.
Oct  9 09:52:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:48.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:48 compute-0 podman[189831]: 2025-10-09 09:52:48.869076867 +0000 UTC m=+0.111411142 container remove 2b7f11b47b72a2ffaa5cf39be949afa24ec592e345c34df0b5d91d01bd3fbed6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_noyce, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:52:48 compute-0 podman[189831]: 2025-10-09 09:52:48.775418871 +0000 UTC m=+0.017753166 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:52:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:48.870Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:48.871Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:48.871Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
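Note: all three webhook retries above fail at name resolution, not at HTTP: the .shiftstack names have no records on the resolver at 192.168.122.80. A quick check from this node, with the hostnames copied from the errors above:

    import socket

    for host in ("np0005478302.shiftstack",
                 "np0005478303.shiftstack",
                 "np0005478304.shiftstack"):
        try:
            print(host, socket.getaddrinfo(host, 8443)[0][4][0])
        except socket.gaierror as exc:
            print(host, "unresolved:", exc)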
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.874 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.875 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:52:48 compute-0 systemd[1]: libpod-conmon-2b7f11b47b72a2ffaa5cf39be949afa24ec592e345c34df0b5d91d01bd3fbed6.scope: Deactivated successfully.
Oct  9 09:52:48 compute-0 nova_compute[187439]: 2025-10-09 09:52:48.887 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:52:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:48.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
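Note: the beast access line above is an external health check: an anonymous HEAD / answered 200 with near-zero latency. An equivalent probe is sketched below; the log does not record the gateway's listen address or port, so both are placeholders:

    import http.client

    conn = http.client.HTTPConnection("RGW_HOST", 8080, timeout=5)  # placeholders
    conn.request("HEAD", "/")
    print(conn.getresponse().status)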
Oct  9 09:52:48 compute-0 podman[189868]: 2025-10-09 09:52:48.993242908 +0000 UTC m=+0.033316452 container create 2616f162c1701050e1505c9a5cff8c854fcf081cf12191fb2d9aee846309b209 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_greider, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:52:49 compute-0 systemd[1]: Started libpod-conmon-2616f162c1701050e1505c9a5cff8c854fcf081cf12191fb2d9aee846309b209.scope.
Oct  9 09:52:49 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:52:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3ed0d5f66169dddf03e69ecaa3ecacb4f32836d3a4631058b2208885db1de6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:52:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3ed0d5f66169dddf03e69ecaa3ecacb4f32836d3a4631058b2208885db1de6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:52:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3ed0d5f66169dddf03e69ecaa3ecacb4f32836d3a4631058b2208885db1de6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:52:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3ed0d5f66169dddf03e69ecaa3ecacb4f32836d3a4631058b2208885db1de6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:52:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3ed0d5f66169dddf03e69ecaa3ecacb4f32836d3a4631058b2208885db1de6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:52:49 compute-0 podman[189868]: 2025-10-09 09:52:49.056832579 +0000 UTC m=+0.096906133 container init 2616f162c1701050e1505c9a5cff8c854fcf081cf12191fb2d9aee846309b209 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_greider, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:52:49 compute-0 podman[189868]: 2025-10-09 09:52:49.061968266 +0000 UTC m=+0.102041810 container start 2616f162c1701050e1505c9a5cff8c854fcf081cf12191fb2d9aee846309b209 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct  9 09:52:49 compute-0 podman[189868]: 2025-10-09 09:52:49.065438552 +0000 UTC m=+0.105512116 container attach 2616f162c1701050e1505c9a5cff8c854fcf081cf12191fb2d9aee846309b209 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  9 09:52:49 compute-0 podman[189868]: 2025-10-09 09:52:48.976552216 +0000 UTC m=+0.016625780 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:52:49 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:52:49 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:52:49 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:52:49 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:52:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:52:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3760170036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:52:49 compute-0 nova_compute[187439]: 2025-10-09 09:52:49.251 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:52:49 compute-0 nova_compute[187439]: 2025-10-09 09:52:49.255 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:52:49 compute-0 nova_compute[187439]: 2025-10-09 09:52:49.268 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:52:49 compute-0 nova_compute[187439]: 2025-10-09 09:52:49.270 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:52:49 compute-0 nova_compute[187439]: 2025-10-09 09:52:49.270 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
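Note: placement derives schedulable capacity from the inventory dict logged above as (total - reserved) * allocation_ratio, e.g. (4 - 0) * 4.0 = 16 schedulable VCPUs on a 4-vCPU host. Worked out with the values from the log:

    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 4, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # MEMORY_MB 7168.0, VCPU 16.0, DISK_GB ~53.1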
Oct  9 09:52:49 compute-0 elastic_greider[189900]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:52:49 compute-0 elastic_greider[189900]: --> All data devices are unavailable
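Note: the two elastic_greider lines above appear to be cephadm's drive-group dry run: the only candidate data device is an LV that is already consumed (the ceph_vg0/ceph_lv0 OSD listed later in this log), so there is nothing new to create. Which devices ceph-volume still considers usable can be listed directly; a sketch, meant to run inside the ceph container:

    import json
    import subprocess

    inv = json.loads(subprocess.check_output(
        ["ceph-volume", "inventory", "--format", "json"]))
    for dev in inv:
        status = ("available" if dev["available"]
                  else "; ".join(dev["rejected_reasons"]))
        print(dev["path"], status)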
Oct  9 09:52:49 compute-0 systemd[1]: libpod-2616f162c1701050e1505c9a5cff8c854fcf081cf12191fb2d9aee846309b209.scope: Deactivated successfully.
Oct  9 09:52:49 compute-0 conmon[189900]: conmon 2616f162c1701050e150 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2616f162c1701050e1505c9a5cff8c854fcf081cf12191fb2d9aee846309b209.scope/container/memory.events
Oct  9 09:52:49 compute-0 podman[189868]: 2025-10-09 09:52:49.336804918 +0000 UTC m=+0.376878463 container died 2616f162c1701050e1505c9a5cff8c854fcf081cf12191fb2d9aee846309b209 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:52:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a3ed0d5f66169dddf03e69ecaa3ecacb4f32836d3a4631058b2208885db1de6-merged.mount: Deactivated successfully.
Oct  9 09:52:49 compute-0 podman[189868]: 2025-10-09 09:52:49.357527866 +0000 UTC m=+0.397601410 container remove 2616f162c1701050e1505c9a5cff8c854fcf081cf12191fb2d9aee846309b209 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_greider, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  9 09:52:49 compute-0 systemd[1]: libpod-conmon-2616f162c1701050e1505c9a5cff8c854fcf081cf12191fb2d9aee846309b209.scope: Deactivated successfully.
Oct  9 09:52:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:49.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:52:49
Oct  9 09:52:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:52:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 09:52:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['.rgw.root', 'backups', 'volumes', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'default.rgw.log', 'vms', '.nfs', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images']
Oct  9 09:52:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
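Note: the balancer pass above ran in upmap mode with a 5% misplaced ceiling and prepared 0 of 10 allowed changes, i.e. the 337 PGs are already evenly placed. The same state can be read back from the mgr; a sketch, assuming an admin keyring on the node:

    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"]))
    print(status["mode"], status["active"])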
Oct  9 09:52:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:52:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:52:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:52:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:52:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:52:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:52:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:52:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:52:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:52:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:52:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:52:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:52:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:52:49 compute-0 podman[190008]: 2025-10-09 09:52:49.762104773 +0000 UTC m=+0.026893738 container create a0d294dd9f8298965499804f4028c161b39ed7c9a4bb07782a443ba687d9f154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct  9 09:52:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:52:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:52:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:52:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:52:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:52:49 compute-0 systemd[1]: Started libpod-conmon-a0d294dd9f8298965499804f4028c161b39ed7c9a4bb07782a443ba687d9f154.scope.
Oct  9 09:52:49 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:52:49 compute-0 podman[190008]: 2025-10-09 09:52:49.81818504 +0000 UTC m=+0.082973995 container init a0d294dd9f8298965499804f4028c161b39ed7c9a4bb07782a443ba687d9f154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:52:49 compute-0 podman[190008]: 2025-10-09 09:52:49.823136299 +0000 UTC m=+0.087925254 container start a0d294dd9f8298965499804f4028c161b39ed7c9a4bb07782a443ba687d9f154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:52:49 compute-0 podman[190008]: 2025-10-09 09:52:49.824246674 +0000 UTC m=+0.089035629 container attach a0d294dd9f8298965499804f4028c161b39ed7c9a4bb07782a443ba687d9f154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:52:49 compute-0 confident_grothendieck[190021]: 167 167
Oct  9 09:52:49 compute-0 systemd[1]: libpod-a0d294dd9f8298965499804f4028c161b39ed7c9a4bb07782a443ba687d9f154.scope: Deactivated successfully.
Oct  9 09:52:49 compute-0 podman[190008]: 2025-10-09 09:52:49.827077944 +0000 UTC m=+0.091866899 container died a0d294dd9f8298965499804f4028c161b39ed7c9a4bb07782a443ba687d9f154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_grothendieck, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  9 09:52:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5619c61a9df2c16e57e16cc174bac3f066d2decabc75bb78f665df6abcec33d5-merged.mount: Deactivated successfully.
Oct  9 09:52:49 compute-0 podman[190008]: 2025-10-09 09:52:49.844104138 +0000 UTC m=+0.108893094 container remove a0d294dd9f8298965499804f4028c161b39ed7c9a4bb07782a443ba687d9f154 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  9 09:52:49 compute-0 podman[190008]: 2025-10-09 09:52:49.750485857 +0000 UTC m=+0.015274822 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:52:49 compute-0 systemd[1]: libpod-conmon-a0d294dd9f8298965499804f4028c161b39ed7c9a4bb07782a443ba687d9f154.scope: Deactivated successfully.
Oct  9 09:52:49 compute-0 podman[190043]: 2025-10-09 09:52:49.967411699 +0000 UTC m=+0.029682317 container create b2dead2735760bdafd5cde1763487457c0de5a50acc5f06ed30044a600677f63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True)
Oct  9 09:52:49 compute-0 systemd[1]: Started libpod-conmon-b2dead2735760bdafd5cde1763487457c0de5a50acc5f06ed30044a600677f63.scope.
Oct  9 09:52:50 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d65bc294cfa915de116ab8b98ee6610cddbbbc51f691a560370f2f4c38e249/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d65bc294cfa915de116ab8b98ee6610cddbbbc51f691a560370f2f4c38e249/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d65bc294cfa915de116ab8b98ee6610cddbbbc51f691a560370f2f4c38e249/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d65bc294cfa915de116ab8b98ee6610cddbbbc51f691a560370f2f4c38e249/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:52:50 compute-0 podman[190043]: 2025-10-09 09:52:50.023101159 +0000 UTC m=+0.085371778 container init b2dead2735760bdafd5cde1763487457c0de5a50acc5f06ed30044a600677f63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_northcutt, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 09:52:50 compute-0 podman[190043]: 2025-10-09 09:52:50.027439603 +0000 UTC m=+0.089710221 container start b2dead2735760bdafd5cde1763487457c0de5a50acc5f06ed30044a600677f63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_northcutt, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  9 09:52:50 compute-0 podman[190043]: 2025-10-09 09:52:50.02848226 +0000 UTC m=+0.090752877 container attach b2dead2735760bdafd5cde1763487457c0de5a50acc5f06ed30044a600677f63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:52:50 compute-0 podman[190043]: 2025-10-09 09:52:49.956258913 +0000 UTC m=+0.018529551 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]: {
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:    "1": [
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:        {
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:            "devices": [
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:                "/dev/loop3"
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:            ],
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:            "lv_name": "ceph_lv0",
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:            "lv_size": "21470642176",
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:            "name": "ceph_lv0",
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:            "tags": {
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:                "ceph.cluster_name": "ceph",
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:                "ceph.crush_device_class": "",
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:                "ceph.encrypted": "0",
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:                "ceph.osd_id": "1",
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:                "ceph.type": "block",
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:                "ceph.vdo": "0",
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:                "ceph.with_tpm": "0"
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:            },
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:            "type": "block",
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:            "vg_name": "ceph_vg0"
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:        }
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]:    ]
Oct  9 09:52:50 compute-0 recursing_northcutt[190056]: }
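Note: the JSON block above has the shape of `ceph-volume lvm list --format json`: a map of OSD id to its logical volumes, with the tag dict also flattened into the lv_tags string. A sketch that maps OSD ids to LV paths and backing devices from a captured copy of that output (the file name is hypothetical, and the naive comma split is fine for the tag values shown here):

    import json

    with open("lvm_list.json") as f:      # captured copy of the output above
        listing = json.load(f)
    for osd_id, lvs in listing.items():
        for lv in lvs:
            tags = dict(t.split("=", 1) for t in lv["lv_tags"].split(","))
            print(osd_id, lv["lv_path"], lv["devices"], tags["ceph.osd_fsid"])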
Oct  9 09:52:50 compute-0 systemd[1]: libpod-b2dead2735760bdafd5cde1763487457c0de5a50acc5f06ed30044a600677f63.scope: Deactivated successfully.
Oct  9 09:52:50 compute-0 conmon[190056]: conmon b2dead2735760bdafd5c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b2dead2735760bdafd5cde1763487457c0de5a50acc5f06ed30044a600677f63.scope/container/memory.events
Oct  9 09:52:50 compute-0 podman[190043]: 2025-10-09 09:52:50.269692927 +0000 UTC m=+0.331963545 container died b2dead2735760bdafd5cde1763487457c0de5a50acc5f06ed30044a600677f63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:52:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0d65bc294cfa915de116ab8b98ee6610cddbbbc51f691a560370f2f4c38e249-merged.mount: Deactivated successfully.
Oct  9 09:52:50 compute-0 podman[190043]: 2025-10-09 09:52:50.295529339 +0000 UTC m=+0.357799957 container remove b2dead2735760bdafd5cde1763487457c0de5a50acc5f06ed30044a600677f63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  9 09:52:50 compute-0 systemd[1]: libpod-conmon-b2dead2735760bdafd5cde1763487457c0de5a50acc5f06ed30044a600677f63.scope: Deactivated successfully.
Oct  9 09:52:50 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v580: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct  9 09:52:50 compute-0 podman[190159]: 2025-10-09 09:52:50.709644858 +0000 UTC m=+0.026608990 container create d4f16723c3ae83a24099fd6c5db170e1986c86784d6fcdd162c5055f80b5c04c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:52:50 compute-0 systemd[1]: Started libpod-conmon-d4f16723c3ae83a24099fd6c5db170e1986c86784d6fcdd162c5055f80b5c04c.scope.
Oct  9 09:52:50 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:52:50 compute-0 podman[190159]: 2025-10-09 09:52:50.755786119 +0000 UTC m=+0.072750262 container init d4f16723c3ae83a24099fd6c5db170e1986c86784d6fcdd162c5055f80b5c04c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_mcnulty, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:52:50 compute-0 podman[190159]: 2025-10-09 09:52:50.760052566 +0000 UTC m=+0.077016709 container start d4f16723c3ae83a24099fd6c5db170e1986c86784d6fcdd162c5055f80b5c04c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_mcnulty, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Oct  9 09:52:50 compute-0 podman[190159]: 2025-10-09 09:52:50.761064193 +0000 UTC m=+0.078028337 container attach d4f16723c3ae83a24099fd6c5db170e1986c86784d6fcdd162c5055f80b5c04c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  9 09:52:50 compute-0 hungry_mcnulty[190172]: 167 167
Oct  9 09:52:50 compute-0 systemd[1]: libpod-d4f16723c3ae83a24099fd6c5db170e1986c86784d6fcdd162c5055f80b5c04c.scope: Deactivated successfully.
Oct  9 09:52:50 compute-0 podman[190159]: 2025-10-09 09:52:50.763849487 +0000 UTC m=+0.080813651 container died d4f16723c3ae83a24099fd6c5db170e1986c86784d6fcdd162c5055f80b5c04c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_mcnulty, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:52:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7d4455bcfd109a53067623a0f8d17e9bed78c25d0a64edbda1854e9817d35a6-merged.mount: Deactivated successfully.
Oct  9 09:52:50 compute-0 podman[190159]: 2025-10-09 09:52:50.781551788 +0000 UTC m=+0.098515931 container remove d4f16723c3ae83a24099fd6c5db170e1986c86784d6fcdd162c5055f80b5c04c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct  9 09:52:50 compute-0 podman[190159]: 2025-10-09 09:52:50.698927613 +0000 UTC m=+0.015891776 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:52:50 compute-0 systemd[1]: libpod-conmon-d4f16723c3ae83a24099fd6c5db170e1986c86784d6fcdd162c5055f80b5c04c.scope: Deactivated successfully.
Oct  9 09:52:50 compute-0 podman[190194]: 2025-10-09 09:52:50.900652563 +0000 UTC m=+0.027497075 container create 633b2a7b1768e30b301d9be3327816ae90f6496e2f2c459f1498487c31ec0415 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  9 09:52:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:50.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:50 compute-0 systemd[1]: Started libpod-conmon-633b2a7b1768e30b301d9be3327816ae90f6496e2f2c459f1498487c31ec0415.scope.
Oct  9 09:52:50 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f644aeea0798d10ceb795fc821c4078260bad9033ce5ba1f488b4063245d77b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f644aeea0798d10ceb795fc821c4078260bad9033ce5ba1f488b4063245d77b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f644aeea0798d10ceb795fc821c4078260bad9033ce5ba1f488b4063245d77b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f644aeea0798d10ceb795fc821c4078260bad9033ce5ba1f488b4063245d77b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:52:50 compute-0 podman[190194]: 2025-10-09 09:52:50.959588797 +0000 UTC m=+0.086433319 container init 633b2a7b1768e30b301d9be3327816ae90f6496e2f2c459f1498487c31ec0415 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:52:50 compute-0 podman[190194]: 2025-10-09 09:52:50.964270187 +0000 UTC m=+0.091114690 container start 633b2a7b1768e30b301d9be3327816ae90f6496e2f2c459f1498487c31ec0415 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:52:50 compute-0 podman[190194]: 2025-10-09 09:52:50.966586396 +0000 UTC m=+0.093430918 container attach 633b2a7b1768e30b301d9be3327816ae90f6496e2f2c459f1498487c31ec0415 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Oct  9 09:52:50 compute-0 podman[190194]: 2025-10-09 09:52:50.889134357 +0000 UTC m=+0.015978880 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:52:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:51 compute-0 friendly_tharp[190230]: {}
Oct  9 09:52:51 compute-0 lvm[190309]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:52:51 compute-0 lvm[190309]: VG ceph_vg0 finished
Oct  9 09:52:51 compute-0 systemd[1]: libpod-633b2a7b1768e30b301d9be3327816ae90f6496e2f2c459f1498487c31ec0415.scope: Deactivated successfully.
Oct  9 09:52:51 compute-0 podman[190194]: 2025-10-09 09:52:51.485006452 +0000 UTC m=+0.611850965 container died 633b2a7b1768e30b301d9be3327816ae90f6496e2f2c459f1498487c31ec0415 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_tharp, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  9 09:52:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:51.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f644aeea0798d10ceb795fc821c4078260bad9033ce5ba1f488b4063245d77b2-merged.mount: Deactivated successfully.
Oct  9 09:52:51 compute-0 podman[190194]: 2025-10-09 09:52:51.509743701 +0000 UTC m=+0.636588203 container remove 633b2a7b1768e30b301d9be3327816ae90f6496e2f2c459f1498487c31ec0415 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_tharp, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:52:51 compute-0 systemd[1]: libpod-conmon-633b2a7b1768e30b301d9be3327816ae90f6496e2f2c459f1498487c31ec0415.scope: Deactivated successfully.
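[analysis] The podman entries above trace two complete one-shot container lifecycles (hungry_mcnulty, then friendly_tharp): create, init, start, attach, died, and remove within roughly a second, all against the same quay.io/ceph/ceph image. The pattern is consistent with cephadm spawning short-lived helper containers (for example, to inventory devices), though the log itself does not name the caller. A minimal sketch for turning these journal lines into structured records, assuming the "(key=value, ...)" label list never contains a comma inside a value, which holds for every event line above:

    import re

    # Matches podman journal lines of the form:
    #   podman[PID]: <ts> <tz> UTC m=+<mono> container <action> <64-hex id> (k=v, k=v, ...)
    EVENT_RE = re.compile(
        r"podman\[\d+\]: (?P<ts>\S+ \S+ \S+ \S+ \S+) container (?P<action>\w+) "
        r"(?P<cid>[0-9a-f]{64}) \((?P<labels>.*)\)$"
    )

    def parse_podman_event(line: str):
        """Return {ts, action, cid, labels} for container event lines, else None."""
        m = EVENT_RE.search(line)
        if not m:
            return None  # e.g. the "image pull" records carry no label list
        labels = dict(kv.split("=", 1) for kv in m.group("labels").split(", "))
        return {"ts": m.group("ts"), "action": m.group("action"),
                "cid": m.group("cid"), "labels": labels}

Fed the lines above, this yields action sequences like create -> init -> start -> attach -> died -> remove for container 633b2a7b..., with labels["name"] == "friendly_tharp".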
Oct  9 09:52:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:52:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:52:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:52:51 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:52:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:51 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:52 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:52:52 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:52:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:52:52] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Oct  9 09:52:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:52:52] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Oct  9 09:52:52 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v581: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:52:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:52:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:52.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:52:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:53.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:54 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v582: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct  9 09:52:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:54.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:55.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:56 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v583: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:52:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:52:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:56.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:52:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:56 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:56 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:56 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:52:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:57.028Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:57.043Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:57.043Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:57.044Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
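[analysis] Every alertmanager failure above is one fault repeated per receiver: the resolver at 192.168.122.80 cannot resolve the three .shiftstack hostnames the ceph-dashboard webhook posts to, so each notify attempt dies in DNS before any TCP connection is made. A quick reproduction of the lookup, assuming it is run on a host using the same resolver the log shows:

    import socket

    # The three receiver names come straight from the alertmanager errors above.
    for host in ("np0005478302.shiftstack",
                 "np0005478303.shiftstack",
                 "np0005478304.shiftstack"):
        try:
            addr = socket.getaddrinfo(host, 8443)[0][4][0]
            print(f"{host} -> {addr}")
        except socket.gaierror as exc:
            # Mirrors the "no such host" in the webhook errors.
            print(f"{host}: lookup failed ({exc})")

If the names resolve here but alertmanager still fails, the resolver configured inside the alertmanager container would be the next place to look.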
Oct  9 09:52:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:57.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:58 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v584: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct  9 09:52:58 compute-0 podman[190353]: 2025-10-09 09:52:58.666760675 +0000 UTC m=+0.105169199 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
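[analysis] The health_status entry above embeds the container's full configuration as a Python-style literal in its config_data label. A sketch for recovering that dict from such a line, assuming no brace characters occur inside the quoted string values (true for the entries in this log):

    import ast

    def extract_config_data(line: str) -> dict:
        """Isolate the brace-balanced config_data={...} span and eval it."""
        start = line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(line[start:], start):
            depth += (ch == "{") - (ch == "}")
            if depth == 0:
                return ast.literal_eval(line[start:i + 1])
        raise ValueError("unbalanced config_data literal")

    # e.g. extract_config_data(line)["volumes"] lists the bind mounts, and
    # ["healthcheck"]["test"] shows the /openstack/healthcheck probe.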
Oct  9 09:52:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:58.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:58.870Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:58.870Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:52:58.870Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:52:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:58.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:52:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
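[analysis] The pg_autoscaler rows above all follow one formula: the printed pg target is the pool's space fraction times its bias times a constant that works out to exactly 300 for every row — plausibly mon_target_pg_per_osd (default 100) times three OSDs in this 60 GiB cluster, though the OSD count is an inference, not something the log states. The target is then quantized to a power of two and compared against the current pg_num. A worked check of three rows:

    # Space fractions and biases copied verbatim from the log lines above.
    rows = [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
        (".nfs",               6.359070782053786e-08, 1.0),
    ]
    K = 300  # assumed: mon_target_pg_per_osd (100) * 3 OSDs
    for pool, frac, bias in rows:
        print(f"{pool}: pg target = {frac * bias * K}")
    # .mgr               -> 0.0021557249951162337   (log: 0.0021557249951162337)
    # cephfs.cephfs.meta -> 0.0006104707950771635   (log: 0.0006104707950771635)
    # .nfs               -> ~1.907721234616136e-05  (log: 1.907721234616136e-05)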
Oct  9 09:52:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:52:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:59.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:00 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v585: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:53:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:53:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:00.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:53:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:01.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:01 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:01 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:01 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:01 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:02 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:53:02.048 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  9 09:53:02 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:53:02.050 92053 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  9 09:53:02 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:53:02.052 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ef217152-08e8-40c8-a663-3565c5b77d4a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  9 09:53:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:53:02] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Oct  9 09:53:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:53:02] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Oct  9 09:53:02 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v586: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:53:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:02.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:03.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v587: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:53:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:53:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:53:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:53:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:04.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:53:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:05.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:05 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:05 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:05 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:06 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v588: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:53:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:53:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:06.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:53:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:07.030Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:07.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:07.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:07.043Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:07.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v589: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:53:08 compute-0 podman[190389]: 2025-10-09 09:53:08.621500707 +0000 UTC m=+0.057651532 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  9 09:53:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:08.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:08.873Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:08.873Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:08.873Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:53:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:08.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:53:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:53:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:09.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:53:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:53:10.104 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:53:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:53:10.105 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:53:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:53:10.106 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:53:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v590: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:53:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:10.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:11.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:53:12] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Oct  9 09:53:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:53:12] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Oct  9 09:53:12 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v591: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:53:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:12.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:53:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:13.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:53:14 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v592: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:53:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:53:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:14.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:53:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:15.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:53:16.260759) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596260818, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1198, "num_deletes": 256, "total_data_size": 2135304, "memory_usage": 2172048, "flush_reason": "Manual Compaction"}
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596268365, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 2067926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17952, "largest_seqno": 19149, "table_properties": {"data_size": 2062313, "index_size": 2944, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 11873, "raw_average_key_size": 18, "raw_value_size": 2050859, "raw_average_value_size": 3270, "num_data_blocks": 132, "num_entries": 627, "num_filter_entries": 627, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003492, "oldest_key_time": 1760003492, "file_creation_time": 1760003596, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 7623 microseconds, and 6215 cpu microseconds.
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268391) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 2067926 bytes OK
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268406) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268762) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268772) EVENT_LOG_v1 {"time_micros": 1760003596268769, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268785) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2129914, prev total WAL file size 2129914, number of live WAL files 2.
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:53:16.269316) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(2019KB)], [38(11MB)]
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596269338, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 14226662, "oldest_snapshot_seqno": -1}
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 5013 keys, 13754813 bytes, temperature: kUnknown
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596308547, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 13754813, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13719627, "index_size": 21572, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 126837, "raw_average_key_size": 25, "raw_value_size": 13626760, "raw_average_value_size": 2718, "num_data_blocks": 890, "num_entries": 5013, "num_filter_entries": 5013, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002419, "oldest_key_time": 0, "file_creation_time": 1760003596, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:53:16.308712) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13754813 bytes
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:53:16.309177) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 362.4 rd, 350.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.6 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(13.5) write-amplify(6.7) OK, records in: 5539, records dropped: 526 output_compression: NoCompression
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:53:16.309192) EVENT_LOG_v1 {"time_micros": 1760003596309185, "job": 18, "event": "compaction_finished", "compaction_time_micros": 39259, "compaction_time_cpu_micros": 20342, "output_level": 6, "num_output_files": 1, "total_output_size": 13754813, "num_input_records": 5539, "num_output_records": 5013, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596309498, "job": 18, "event": "table_file_deletion", "file_number": 40}
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596311028, "job": 18, "event": "table_file_deletion", "file_number": 38}
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:53:16.269272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:53:16.311049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:53:16.311051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:53:16.311052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:53:16.311054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:53:16 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:53:16.311055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:53:16 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v593: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 09:53:16 compute-0 podman[190438]: 2025-10-09 09:53:16.630654545 +0000 UTC m=+0.072191547 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:53:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:16.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:17.031Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:17.041Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:17.041Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:17.041Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:17.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:18 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v594: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:53:18 compute-0 podman[190456]: 2025-10-09 09:53:18.599735052 +0000 UTC m=+0.042502299 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct  9 09:53:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:18.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:18.873Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:18.873Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:18.874Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:18.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:19.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:53:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:53:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:53:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:53:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:53:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:53:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:53:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:53:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:20 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:20 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:20 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:20 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v595: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:53:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:20.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:21.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:53:22] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Oct  9 09:53:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:53:22] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Oct  9 09:53:22 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v596: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 09:53:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:22.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:53:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:23.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:53:24 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v597: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 09:53:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:24.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:24 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:24 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:24 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:25 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:25.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:26 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v598: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 09:53:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:26.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:27.031Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:27.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:27.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:27.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:27.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:28 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v599: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:53:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:28.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:28.873Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:28.873Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:28.873Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:28.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:29.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:29 compute-0 podman[190483]: 2025-10-09 09:53:29.616713864 +0000 UTC m=+0.056649421 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  9 09:53:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:30 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:30 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v600: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:53:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:30.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:31.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:53:32] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Oct  9 09:53:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:53:32] "GET /metrics HTTP/1.1" 200 48421 "" "Prometheus/2.51.0"
Oct  9 09:53:32 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v601: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:53:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:32.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:33.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:34 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v602: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:53:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:53:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:53:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:34.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:34 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:34 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:34 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:35.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:36 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v603: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:53:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:53:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:36.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:53:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:37.032Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:37.045Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:37.046Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:37.046Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:37.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:38 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v604: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:53:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:38.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:38.873Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:38.874Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:38.874Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:38.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:39.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:39 compute-0 podman[190541]: 2025-10-09 09:53:39.598747248 +0000 UTC m=+0.039385409 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  9 09:53:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:40 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:40 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v605: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:53:40 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 09:53:40 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 4290 writes, 19K keys, 4290 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s#012Cumulative WAL: 4290 writes, 4290 syncs, 1.00 writes per sync, written: 0.04 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1494 writes, 6098 keys, 1494 commit groups, 1.0 writes per commit group, ingest: 11.12 MB, 0.02 MB/s#012Interval WAL: 1494 writes, 1494 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    407.3      0.08              0.06         9    0.009       0      0       0.0       0.0#012  L6      1/0   13.12 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    508.8    431.3      0.24              0.15         8    0.030     36K   4311       0.0       0.0#012 Sum      1/0   13.12 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3    383.9    425.4      0.32              0.21        17    0.019     36K   4311       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7    364.8    380.8      0.13              0.08         6    0.022     16K   2041       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    508.8    431.3      0.24              0.15         8    0.030     36K   4311       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    415.0      0.08              0.06         8    0.009       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     28.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.031, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.13 GB write, 0.11 MB/s write, 0.12 GB read, 0.10 MB/s read, 0.3 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557b3d66b350#2 capacity: 304.00 MB usage: 5.54 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 9.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(352,5.22 MB,1.7163%) FilterBlock(18,113.80 KB,0.0365558%) IndexBlock(18,218.97 KB,0.070341%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  9 09:53:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:40.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:41.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:53:42] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Oct  9 09:53:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:53:42] "GET /metrics HTTP/1.1" 200 48425 "" "Prometheus/2.51.0"
Oct  9 09:53:42 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v606: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:53:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:53:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:42.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:53:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:43.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:44 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v607: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:53:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:44 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:44 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:44 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:45 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:53:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:45.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:53:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:45.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:46 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v608: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:53:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:47.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:47.034Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:47.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:47.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:47.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:47.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:47 compute-0 podman[190566]: 2025-10-09 09:53:47.601974619 +0000 UTC m=+0.038861546 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  9 09:53:48 compute-0 nova_compute[187439]: 2025-10-09 09:53:48.254 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:48 compute-0 nova_compute[187439]: 2025-10-09 09:53:48.255 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:48 compute-0 nova_compute[187439]: 2025-10-09 09:53:48.255 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 09:53:48 compute-0 nova_compute[187439]: 2025-10-09 09:53:48.255 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 09:53:48 compute-0 nova_compute[187439]: 2025-10-09 09:53:48.268 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 09:53:48 compute-0 nova_compute[187439]: 2025-10-09 09:53:48.268 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:48 compute-0 nova_compute[187439]: 2025-10-09 09:53:48.268 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:48 compute-0 nova_compute[187439]: 2025-10-09 09:53:48.268 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:48 compute-0 nova_compute[187439]: 2025-10-09 09:53:48.268 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:48 compute-0 nova_compute[187439]: 2025-10-09 09:53:48.283 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:53:48 compute-0 nova_compute[187439]: 2025-10-09 09:53:48.283 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:53:48 compute-0 nova_compute[187439]: 2025-10-09 09:53:48.284 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:53:48 compute-0 nova_compute[187439]: 2025-10-09 09:53:48.284 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:53:48 compute-0 nova_compute[187439]: 2025-10-09 09:53:48.284 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:53:48 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v609: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:53:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:53:48 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4016438260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:53:48 compute-0 nova_compute[187439]: 2025-10-09 09:53:48.673 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.389s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:53:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:48.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:48.873Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:48.874Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:48.874Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:48 compute-0 nova_compute[187439]: 2025-10-09 09:53:48.903 2 WARNING nova.virt.libvirt.driver [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:53:48 compute-0 nova_compute[187439]: 2025-10-09 09:53:48.905 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5100MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:53:48 compute-0 nova_compute[187439]: 2025-10-09 09:53:48.905 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:53:48 compute-0 nova_compute[187439]: 2025-10-09 09:53:48.906 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:53:48 compute-0 nova_compute[187439]: 2025-10-09 09:53:48.960 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:53:48 compute-0 nova_compute[187439]: 2025-10-09 09:53:48.961 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:53:48 compute-0 nova_compute[187439]: 2025-10-09 09:53:48.981 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:53:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:53:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:49.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:53:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:53:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/325359725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:53:49 compute-0 nova_compute[187439]: 2025-10-09 09:53:49.344 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:53:49 compute-0 nova_compute[187439]: 2025-10-09 09:53:49.348 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:53:49 compute-0 nova_compute[187439]: 2025-10-09 09:53:49.364 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:53:49 compute-0 nova_compute[187439]: 2025-10-09 09:53:49.366 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:53:49 compute-0 nova_compute[187439]: 2025-10-09 09:53:49.366 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:53:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:53:49
Oct  9 09:53:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:53:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 09:53:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['.nfs', 'images', '.rgw.root', 'vms', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'volumes']
Oct  9 09:53:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 09:53:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:53:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3354045688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:53:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:49.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:53:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:53:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:53:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:53:49 compute-0 podman[190628]: 2025-10-09 09:53:49.606898207 +0000 UTC m=+0.046527833 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  9 09:53:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:53:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:53:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:53:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:53:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:53:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:53:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:53:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:53:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:53:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:53:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:53:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:53:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:53:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:53:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:50 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:50 compute-0 nova_compute[187439]: 2025-10-09 09:53:50.344 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:50 compute-0 nova_compute[187439]: 2025-10-09 09:53:50.345 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:50 compute-0 nova_compute[187439]: 2025-10-09 09:53:50.345 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:50 compute-0 nova_compute[187439]: 2025-10-09 09:53:50.345 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 09:53:50 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v610: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:53:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:53:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:51.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:53:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:51.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:53:52 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:53:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:53:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:53:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:53:52 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v611: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:53:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:53:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:53:52] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Oct  9 09:53:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:53:52] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Oct  9 09:53:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:53:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:53:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:53:52 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:53:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:53:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:53:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:53:52 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:53:52 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:53:52 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:53:52 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:53:52 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:53:52 compute-0 podman[190835]: 2025-10-09 09:53:52.710437997 +0000 UTC m=+0.030094763 container create c8a7e42a619dadd23ea4843e3ff98d57d0689a068b245c6eb02cc805a4f826bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_shtern, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:53:52 compute-0 systemd[1]: Started libpod-conmon-c8a7e42a619dadd23ea4843e3ff98d57d0689a068b245c6eb02cc805a4f826bd.scope.
Oct  9 09:53:52 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:53:52 compute-0 podman[190835]: 2025-10-09 09:53:52.771123288 +0000 UTC m=+0.090780073 container init c8a7e42a619dadd23ea4843e3ff98d57d0689a068b245c6eb02cc805a4f826bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct  9 09:53:52 compute-0 podman[190835]: 2025-10-09 09:53:52.775851485 +0000 UTC m=+0.095508259 container start c8a7e42a619dadd23ea4843e3ff98d57d0689a068b245c6eb02cc805a4f826bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_shtern, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:53:52 compute-0 zen_shtern[190848]: 167 167
Oct  9 09:53:52 compute-0 systemd[1]: libpod-c8a7e42a619dadd23ea4843e3ff98d57d0689a068b245c6eb02cc805a4f826bd.scope: Deactivated successfully.
Oct  9 09:53:52 compute-0 conmon[190848]: conmon c8a7e42a619dadd23ea4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8a7e42a619dadd23ea4843e3ff98d57d0689a068b245c6eb02cc805a4f826bd.scope/container/memory.events
Oct  9 09:53:52 compute-0 podman[190835]: 2025-10-09 09:53:52.784209666 +0000 UTC m=+0.103866442 container attach c8a7e42a619dadd23ea4843e3ff98d57d0689a068b245c6eb02cc805a4f826bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_shtern, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  9 09:53:52 compute-0 podman[190835]: 2025-10-09 09:53:52.784945083 +0000 UTC m=+0.104601858 container died c8a7e42a619dadd23ea4843e3ff98d57d0689a068b245c6eb02cc805a4f826bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_shtern, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 09:53:52 compute-0 podman[190835]: 2025-10-09 09:53:52.698484926 +0000 UTC m=+0.018141721 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:53:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b2ac13275271d19dd331996f3d9c73195ef59f7e0600915d525a14a6111b2f1-merged.mount: Deactivated successfully.
Oct  9 09:53:52 compute-0 podman[190835]: 2025-10-09 09:53:52.804679187 +0000 UTC m=+0.124335963 container remove c8a7e42a619dadd23ea4843e3ff98d57d0689a068b245c6eb02cc805a4f826bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_shtern, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct  9 09:53:52 compute-0 systemd[1]: libpod-conmon-c8a7e42a619dadd23ea4843e3ff98d57d0689a068b245c6eb02cc805a4f826bd.scope: Deactivated successfully.
Oct  9 09:53:52 compute-0 podman[190870]: 2025-10-09 09:53:52.93185768 +0000 UTC m=+0.030881387 container create 87c836cc3f95386edec25c767e6b58452685891485c77b0dde78d8f94dccc8b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:53:52 compute-0 systemd[1]: Started libpod-conmon-87c836cc3f95386edec25c767e6b58452685891485c77b0dde78d8f94dccc8b5.scope.
Oct  9 09:53:52 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58090c0f704093b81f4fa43453bd426a49d2da7ce1f91345f4638fc7338307cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58090c0f704093b81f4fa43453bd426a49d2da7ce1f91345f4638fc7338307cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58090c0f704093b81f4fa43453bd426a49d2da7ce1f91345f4638fc7338307cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58090c0f704093b81f4fa43453bd426a49d2da7ce1f91345f4638fc7338307cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58090c0f704093b81f4fa43453bd426a49d2da7ce1f91345f4638fc7338307cc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:53:52 compute-0 podman[190870]: 2025-10-09 09:53:52.994259506 +0000 UTC m=+0.093283213 container init 87c836cc3f95386edec25c767e6b58452685891485c77b0dde78d8f94dccc8b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:53:52 compute-0 podman[190870]: 2025-10-09 09:53:52.999594206 +0000 UTC m=+0.098617903 container start 87c836cc3f95386edec25c767e6b58452685891485c77b0dde78d8f94dccc8b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct  9 09:53:53 compute-0 podman[190870]: 2025-10-09 09:53:53.000877065 +0000 UTC m=+0.099900772 container attach 87c836cc3f95386edec25c767e6b58452685891485c77b0dde78d8f94dccc8b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  9 09:53:53 compute-0 podman[190870]: 2025-10-09 09:53:52.919727966 +0000 UTC m=+0.018751673 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:53:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:53.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:53 compute-0 keen_feistel[190882]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:53:53 compute-0 keen_feistel[190882]: --> All data devices are unavailable
Oct  9 09:53:53 compute-0 systemd[1]: libpod-87c836cc3f95386edec25c767e6b58452685891485c77b0dde78d8f94dccc8b5.scope: Deactivated successfully.
Oct  9 09:53:53 compute-0 conmon[190882]: conmon 87c836cc3f95386edec2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-87c836cc3f95386edec25c767e6b58452685891485c77b0dde78d8f94dccc8b5.scope/container/memory.events
Oct  9 09:53:53 compute-0 podman[190870]: 2025-10-09 09:53:53.279713202 +0000 UTC m=+0.378736899 container died 87c836cc3f95386edec25c767e6b58452685891485c77b0dde78d8f94dccc8b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_feistel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Oct  9 09:53:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-58090c0f704093b81f4fa43453bd426a49d2da7ce1f91345f4638fc7338307cc-merged.mount: Deactivated successfully.
Oct  9 09:53:53 compute-0 podman[190870]: 2025-10-09 09:53:53.30231735 +0000 UTC m=+0.401341047 container remove 87c836cc3f95386edec25c767e6b58452685891485c77b0dde78d8f94dccc8b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_feistel, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 09:53:53 compute-0 systemd[1]: libpod-conmon-87c836cc3f95386edec25c767e6b58452685891485c77b0dde78d8f94dccc8b5.scope: Deactivated successfully.
Oct  9 09:53:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:53.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:53 compute-0 podman[190989]: 2025-10-09 09:53:53.748952299 +0000 UTC m=+0.027839009 container create 5f753e73772b21648745a959a57c735ccb561b64ff3be1304778951f2d3f631e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_galois, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  9 09:53:53 compute-0 systemd[1]: Started libpod-conmon-5f753e73772b21648745a959a57c735ccb561b64ff3be1304778951f2d3f631e.scope.
Oct  9 09:53:53 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:53:53 compute-0 podman[190989]: 2025-10-09 09:53:53.80633401 +0000 UTC m=+0.085220719 container init 5f753e73772b21648745a959a57c735ccb561b64ff3be1304778951f2d3f631e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_galois, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:53:53 compute-0 podman[190989]: 2025-10-09 09:53:53.812767703 +0000 UTC m=+0.091654402 container start 5f753e73772b21648745a959a57c735ccb561b64ff3be1304778951f2d3f631e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct  9 09:53:53 compute-0 podman[190989]: 2025-10-09 09:53:53.814348634 +0000 UTC m=+0.093235352 container attach 5f753e73772b21648745a959a57c735ccb561b64ff3be1304778951f2d3f631e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  9 09:53:53 compute-0 sad_galois[191002]: 167 167
Oct  9 09:53:53 compute-0 systemd[1]: libpod-5f753e73772b21648745a959a57c735ccb561b64ff3be1304778951f2d3f631e.scope: Deactivated successfully.
Oct  9 09:53:53 compute-0 podman[190989]: 2025-10-09 09:53:53.816369354 +0000 UTC m=+0.095256053 container died 5f753e73772b21648745a959a57c735ccb561b64ff3be1304778951f2d3f631e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_galois, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:53:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccf5507bbd3c2b8157ccd3631d17ad17fc739bd853f99e19b5d67be379af51d4-merged.mount: Deactivated successfully.
Oct  9 09:53:53 compute-0 podman[190989]: 2025-10-09 09:53:53.83325613 +0000 UTC m=+0.112142828 container remove 5f753e73772b21648745a959a57c735ccb561b64ff3be1304778951f2d3f631e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_galois, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  9 09:53:53 compute-0 podman[190989]: 2025-10-09 09:53:53.737717003 +0000 UTC m=+0.016603722 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:53:53 compute-0 systemd[1]: libpod-conmon-5f753e73772b21648745a959a57c735ccb561b64ff3be1304778951f2d3f631e.scope: Deactivated successfully.
Oct  9 09:53:53 compute-0 podman[191024]: 2025-10-09 09:53:53.962980031 +0000 UTC m=+0.031123994 container create 39035b082f87238d8f1e12a1fcdf6efeb692d92afb4444dbe156643c31122e60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_williams, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:53:53 compute-0 systemd[1]: Started libpod-conmon-39035b082f87238d8f1e12a1fcdf6efeb692d92afb4444dbe156643c31122e60.scope.
Oct  9 09:53:54 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:53:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37c451b73baf08e6c18de49252a06fb5af8ce79857b2276e23ecf05d256cd1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:53:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37c451b73baf08e6c18de49252a06fb5af8ce79857b2276e23ecf05d256cd1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:53:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37c451b73baf08e6c18de49252a06fb5af8ce79857b2276e23ecf05d256cd1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:53:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37c451b73baf08e6c18de49252a06fb5af8ce79857b2276e23ecf05d256cd1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:53:54 compute-0 podman[191024]: 2025-10-09 09:53:54.028909942 +0000 UTC m=+0.097053896 container init 39035b082f87238d8f1e12a1fcdf6efeb692d92afb4444dbe156643c31122e60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_williams, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  9 09:53:54 compute-0 podman[191024]: 2025-10-09 09:53:54.034456993 +0000 UTC m=+0.102600945 container start 39035b082f87238d8f1e12a1fcdf6efeb692d92afb4444dbe156643c31122e60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_williams, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  9 09:53:54 compute-0 podman[191024]: 2025-10-09 09:53:54.035832927 +0000 UTC m=+0.103976881 container attach 39035b082f87238d8f1e12a1fcdf6efeb692d92afb4444dbe156643c31122e60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_williams, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:53:54 compute-0 podman[191024]: 2025-10-09 09:53:53.94973761 +0000 UTC m=+0.017881582 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:53:54 compute-0 goofy_williams[191037]: {
Oct  9 09:53:54 compute-0 goofy_williams[191037]:    "1": [
Oct  9 09:53:54 compute-0 goofy_williams[191037]:        {
Oct  9 09:53:54 compute-0 goofy_williams[191037]:            "devices": [
Oct  9 09:53:54 compute-0 goofy_williams[191037]:                "/dev/loop3"
Oct  9 09:53:54 compute-0 goofy_williams[191037]:            ],
Oct  9 09:53:54 compute-0 goofy_williams[191037]:            "lv_name": "ceph_lv0",
Oct  9 09:53:54 compute-0 goofy_williams[191037]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:53:54 compute-0 goofy_williams[191037]:            "lv_size": "21470642176",
Oct  9 09:53:54 compute-0 goofy_williams[191037]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:53:54 compute-0 goofy_williams[191037]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:53:54 compute-0 goofy_williams[191037]:            "name": "ceph_lv0",
Oct  9 09:53:54 compute-0 goofy_williams[191037]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:53:54 compute-0 goofy_williams[191037]:            "tags": {
Oct  9 09:53:54 compute-0 goofy_williams[191037]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:53:54 compute-0 goofy_williams[191037]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:53:54 compute-0 goofy_williams[191037]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:53:54 compute-0 goofy_williams[191037]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:53:54 compute-0 goofy_williams[191037]:                "ceph.cluster_name": "ceph",
Oct  9 09:53:54 compute-0 goofy_williams[191037]:                "ceph.crush_device_class": "",
Oct  9 09:53:54 compute-0 goofy_williams[191037]:                "ceph.encrypted": "0",
Oct  9 09:53:54 compute-0 goofy_williams[191037]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:53:54 compute-0 goofy_williams[191037]:                "ceph.osd_id": "1",
Oct  9 09:53:54 compute-0 goofy_williams[191037]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:53:54 compute-0 goofy_williams[191037]:                "ceph.type": "block",
Oct  9 09:53:54 compute-0 goofy_williams[191037]:                "ceph.vdo": "0",
Oct  9 09:53:54 compute-0 goofy_williams[191037]:                "ceph.with_tpm": "0"
Oct  9 09:53:54 compute-0 goofy_williams[191037]:            },
Oct  9 09:53:54 compute-0 goofy_williams[191037]:            "type": "block",
Oct  9 09:53:54 compute-0 goofy_williams[191037]:            "vg_name": "ceph_vg0"
Oct  9 09:53:54 compute-0 goofy_williams[191037]:        }
Oct  9 09:53:54 compute-0 goofy_williams[191037]:    ]
Oct  9 09:53:54 compute-0 goofy_williams[191037]: }
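
The JSON block emitted by the goofy_williams container has the shape of `ceph-volume lvm list --format json` output: a map keyed by OSD id, each entry carrying the backing devices, the LV path, and the `ceph.*` LVM tags. A minimal parsing sketch, assuming that format (field names inferred from the lines above):

```python
import json

# Sketch: parse ceph-volume lvm list --format json style output.
# Format inferred from the log above; field names may vary by release.
raw = """{ "1": [ { "devices": ["/dev/loop3"],
                    "lv_path": "/dev/ceph_vg0/ceph_lv0",
                    "tags": {"ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
                             "ceph.osd_id": "1", "ceph.type": "block"} } ] }"""

for osd_id, lvs in json.loads(raw).items():
    for lv in lvs:
        tags = lv.get("tags", {})
        print(f"osd.{osd_id}: type={tags.get('ceph.type')} "
              f"path={lv.get('lv_path')} devices={','.join(lv.get('devices', []))} "
              f"fsid={tags.get('ceph.osd_fsid')}")
```

This appears to be part of cephadm's periodic device refresh (note the `mgr/cephadm/host.compute-0.devices.0` config-key write a moment later); the tags are what tie the LV back to osd.1 and the cluster fsid.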
Oct  9 09:53:54 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v612: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct  9 09:53:54 compute-0 systemd[1]: libpod-39035b082f87238d8f1e12a1fcdf6efeb692d92afb4444dbe156643c31122e60.scope: Deactivated successfully.
Oct  9 09:53:54 compute-0 podman[191047]: 2025-10-09 09:53:54.315283483 +0000 UTC m=+0.019824065 container died 39035b082f87238d8f1e12a1fcdf6efeb692d92afb4444dbe156643c31122e60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:53:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d37c451b73baf08e6c18de49252a06fb5af8ce79857b2276e23ecf05d256cd1a-merged.mount: Deactivated successfully.
Oct  9 09:53:54 compute-0 podman[191047]: 2025-10-09 09:53:54.336861124 +0000 UTC m=+0.041401686 container remove 39035b082f87238d8f1e12a1fcdf6efeb692d92afb4444dbe156643c31122e60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_williams, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True)
Oct  9 09:53:54 compute-0 systemd[1]: libpod-conmon-39035b082f87238d8f1e12a1fcdf6efeb692d92afb4444dbe156643c31122e60.scope: Deactivated successfully.
Oct  9 09:53:54 compute-0 podman[191141]: 2025-10-09 09:53:54.795346842 +0000 UTC m=+0.027246481 container create 65e2265daf6653d9962a808fb447407487617a32403a7ba13c39c030a9ccfcc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Oct  9 09:53:54 compute-0 systemd[1]: Started libpod-conmon-65e2265daf6653d9962a808fb447407487617a32403a7ba13c39c030a9ccfcc9.scope.
Oct  9 09:53:54 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:53:54 compute-0 podman[191141]: 2025-10-09 09:53:54.854978425 +0000 UTC m=+0.086878074 container init 65e2265daf6653d9962a808fb447407487617a32403a7ba13c39c030a9ccfcc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_bell, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  9 09:53:54 compute-0 podman[191141]: 2025-10-09 09:53:54.860013501 +0000 UTC m=+0.091913139 container start 65e2265daf6653d9962a808fb447407487617a32403a7ba13c39c030a9ccfcc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_bell, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:53:54 compute-0 podman[191141]: 2025-10-09 09:53:54.861296029 +0000 UTC m=+0.093195689 container attach 65e2265daf6653d9962a808fb447407487617a32403a7ba13c39c030a9ccfcc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct  9 09:53:54 compute-0 jolly_bell[191154]: 167 167
Oct  9 09:53:54 compute-0 systemd[1]: libpod-65e2265daf6653d9962a808fb447407487617a32403a7ba13c39c030a9ccfcc9.scope: Deactivated successfully.
Oct  9 09:53:54 compute-0 podman[191141]: 2025-10-09 09:53:54.862774126 +0000 UTC m=+0.094673935 container died 65e2265daf6653d9962a808fb447407487617a32403a7ba13c39c030a9ccfcc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_bell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:53:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e94947f99ab257cbe4696af01b92b189023ff1ef6c57e142d630b9b90d8e90c2-merged.mount: Deactivated successfully.
Oct  9 09:53:54 compute-0 podman[191141]: 2025-10-09 09:53:54.784551074 +0000 UTC m=+0.016450724 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:53:54 compute-0 podman[191141]: 2025-10-09 09:53:54.881517151 +0000 UTC m=+0.113416791 container remove 65e2265daf6653d9962a808fb447407487617a32403a7ba13c39c030a9ccfcc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  9 09:53:54 compute-0 systemd[1]: libpod-conmon-65e2265daf6653d9962a808fb447407487617a32403a7ba13c39c030a9ccfcc9.scope: Deactivated successfully.
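
jolly_bell's only output was `167 167`, which matches the fixed ceph uid/gid on RHEL-family images; this looks like the short-lived probe cephadm runs to learn which ids it should chown host directories to before deploying daemons. A hedged re-run of that probe (image digest copied from the log; the exact path cephadm stats may differ):

```python
import subprocess

# Hypothetical re-run of the uid/gid probe seen above ("167 167").
# 'stat -c "%u %g"' is plain coreutils inside the ceph image.
IMAGE = "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"
out = subprocess.run(
    ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
).stdout.strip()
uid, gid = map(int, out.split())
print(uid, gid)  # expected: 167 167 (the ceph user/group on RHEL-family builds)
```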
Oct  9 09:53:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:54 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:54 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:54 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:55 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:55 compute-0 podman[191177]: 2025-10-09 09:53:55.012645312 +0000 UTC m=+0.032528633 container create 21721ace7c83352a82db3ab28a603eb893e35a3b09a7c7c398b24f05e676c2dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:53:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:53:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:55.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
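
The anonymous `HEAD / HTTP/1.0` pairs that repeat every two seconds from 192.168.122.100 and .102 look like load-balancer health checks against radosgw. A quick sketch for pulling the client, status, and latency out of these beast access-log lines (field layout inferred from the lines themselves):

```python
import re

# Field layout inferred from the beast access-log lines above.
BEAST = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<when>[^\]]+)\] '
    r'"(?P<req>[^"]+)" (?P<status>\d+) .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f7346e135d0: 192.168.122.102 - anonymous '
        '[09/Oct/2025:09:53:55.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000011s')
m = BEAST.search(line)
print(m.group("client"), m.group("status"), float(m.group("latency")))
# -> 192.168.122.102 200 0.001000011
```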
Oct  9 09:53:55 compute-0 systemd[1]: Started libpod-conmon-21721ace7c83352a82db3ab28a603eb893e35a3b09a7c7c398b24f05e676c2dd.scope.
Oct  9 09:53:55 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:53:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b825e3bcff302c15e707b26f0f609f50379148813ab31cd4f5de898d3ae36f05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:53:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b825e3bcff302c15e707b26f0f609f50379148813ab31cd4f5de898d3ae36f05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:53:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b825e3bcff302c15e707b26f0f609f50379148813ab31cd4f5de898d3ae36f05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:53:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b825e3bcff302c15e707b26f0f609f50379148813ab31cd4f5de898d3ae36f05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:53:55 compute-0 podman[191177]: 2025-10-09 09:53:55.070958188 +0000 UTC m=+0.090841509 container init 21721ace7c83352a82db3ab28a603eb893e35a3b09a7c7c398b24f05e676c2dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bose, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  9 09:53:55 compute-0 podman[191177]: 2025-10-09 09:53:55.075908523 +0000 UTC m=+0.095791835 container start 21721ace7c83352a82db3ab28a603eb893e35a3b09a7c7c398b24f05e676c2dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:53:55 compute-0 podman[191177]: 2025-10-09 09:53:55.076997547 +0000 UTC m=+0.096880858 container attach 21721ace7c83352a82db3ab28a603eb893e35a3b09a7c7c398b24f05e676c2dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bose, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS)
Oct  9 09:53:55 compute-0 podman[191177]: 2025-10-09 09:53:54.999154671 +0000 UTC m=+0.019038013 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:53:55 compute-0 boring_bose[191190]: {}
Oct  9 09:53:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:55.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:55 compute-0 lvm[191267]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:53:55 compute-0 lvm[191267]: VG ceph_vg0 finished
Oct  9 09:53:55 compute-0 systemd[1]: libpod-21721ace7c83352a82db3ab28a603eb893e35a3b09a7c7c398b24f05e676c2dd.scope: Deactivated successfully.
Oct  9 09:53:55 compute-0 podman[191177]: 2025-10-09 09:53:55.576547464 +0000 UTC m=+0.596430785 container died 21721ace7c83352a82db3ab28a603eb893e35a3b09a7c7c398b24f05e676c2dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bose, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:53:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-b825e3bcff302c15e707b26f0f609f50379148813ab31cd4f5de898d3ae36f05-merged.mount: Deactivated successfully.
Oct  9 09:53:55 compute-0 podman[191177]: 2025-10-09 09:53:55.600045797 +0000 UTC m=+0.619929108 container remove 21721ace7c83352a82db3ab28a603eb893e35a3b09a7c7c398b24f05e676c2dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bose, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:53:55 compute-0 systemd[1]: libpod-conmon-21721ace7c83352a82db3ab28a603eb893e35a3b09a7c7c398b24f05e676c2dd.scope: Deactivated successfully.
Oct  9 09:53:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:53:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:53:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:53:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:53:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:56 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v613: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:53:56 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:53:56 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:53:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:57.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:57.034Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:57.043Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:57.044Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:57.044Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
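
All three dashboard webhook targets fail identically: the resolver at 192.168.122.80:53 has no records for the `*.shiftstack` names, so alertmanager can never deliver the notification and keeps retrying. A quick confirmation sketch using the host's default resolver (names and port taken from the errors above):

```python
import socket

# Names and port taken from the alertmanager errors above.
for host in ("np0005478302.shiftstack",
             "np0005478303.shiftstack",
             "np0005478304.shiftstack"):
    try:
        addrs = {ai[4][0] for ai in socket.getaddrinfo(host, 8443)}
        print(f"{host}: {sorted(addrs)}")
    except socket.gaierror as exc:
        print(f"{host}: unresolved ({exc})")  # matches the 'no such host' failures
```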
Oct  9 09:53:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:57.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:58 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v614: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct  9 09:53:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:58.867Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:58.881Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:58.881Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:53:58.881Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:53:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:59.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:53:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
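
The pg_autoscaler numbers above are internally consistent: each logged `pg target` equals `capacity_ratio * bias * budget`, and solving for the budget gives roughly 300 PGs (e.g. 3 OSDs at the default 100 target PGs per OSD). A sketch that recovers it from the logged rows:

```python
# Sketch: check the pg_autoscaler arithmetic in the lines above.
# The logged "pg target" is capacity_ratio * bias * budget.
rows = [
    (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
    ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ("default.rgw.meta",   1.2718141564107572e-07, 4.0, 0.00015261769876929088),
]
for name, ratio, bias, logged_target in rows:
    budget = logged_target / (ratio * bias)
    print(f"{name}: budget≈{budget:.0f}, target={ratio * bias * budget:.6g}")
# Each row recovers budget ≈ 300. The module then quantizes the target to a
# power of two, bounded below by pg_num_min, before deciding whether to resize;
# with an essentially empty cluster, every pool stays at its current pg_num.
```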
Oct  9 09:53:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:53:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:59.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:59 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:59 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:53:59 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:00 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:00 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v615: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct  9 09:54:00 compute-0 podman[191308]: 2025-10-09 09:54:00.623900372 +0000 UTC m=+0.060661046 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  9 09:54:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:01.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:01.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:02 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v616: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:54:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:54:02] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Oct  9 09:54:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:54:02] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Oct  9 09:54:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:03.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:03.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v617: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:54:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:54:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:54:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:04 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:04 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:04 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:05 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
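
The ganesha daemon keeps re-entering its 90-second grace period every ~5 seconds and, with `clid count(0)`, never lifts it early; whatever is driving the restarts fires far faster than the grace window ever runs out. A sketch that measures the cadence from journal lines shaped like the ones above (ganesha timestamps are DD/MM/YYYY):

```python
import re
from datetime import datetime

# Tally grace restarts from ganesha lines shaped like the ones above.
PAT = re.compile(r"(\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2}) .* "
                 r"nfs_start_grace .*NFS Server Now IN GRACE, duration (\d+)")
lines = [
    "09/10/2025 09:53:54 : ... nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90",
    "09/10/2025 09:53:59 : ... nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90",
    "09/10/2025 09:54:04 : ... nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90",
]
times = [datetime.strptime(m.group(1), "%d/%m/%Y %H:%M:%S")
         for line in lines if (m := PAT.search(line))]
gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print(gaps)  # -> [5.0, 5.0]: grace restarts every ~5s, never running out its 90s
```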
Oct  9 09:54:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:05.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:05.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v618: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:54:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:07.036Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:07.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:07.050Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:07.050Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:07.050Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:07.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v619: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:54:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:08.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:08.882Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:08.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:08.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:09.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:09.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:54:09.713204) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649713226, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 730, "num_deletes": 251, "total_data_size": 1047417, "memory_usage": 1068176, "flush_reason": "Manual Compaction"}
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649716980, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 1032670, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19150, "largest_seqno": 19879, "table_properties": {"data_size": 1028915, "index_size": 1535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8578, "raw_average_key_size": 19, "raw_value_size": 1021376, "raw_average_value_size": 2316, "num_data_blocks": 68, "num_entries": 441, "num_filter_entries": 441, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003597, "oldest_key_time": 1760003597, "file_creation_time": 1760003649, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 3810 microseconds, and 2772 cpu microseconds.
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:54:09.717012) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 1032670 bytes OK
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:54:09.717025) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:54:09.717355) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:54:09.717365) EVENT_LOG_v1 {"time_micros": 1760003649717362, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:54:09.717374) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1043748, prev total WAL file size 1043748, number of live WAL files 2.
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:54:09.717679) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(1008KB)], [41(13MB)]
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649717705, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 14787483, "oldest_snapshot_seqno": -1}
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4938 keys, 12621235 bytes, temperature: kUnknown
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649755789, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 12621235, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12587542, "index_size": 20271, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12357, "raw_key_size": 125903, "raw_average_key_size": 25, "raw_value_size": 12496853, "raw_average_value_size": 2530, "num_data_blocks": 833, "num_entries": 4938, "num_filter_entries": 4938, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002419, "oldest_key_time": 0, "file_creation_time": 1760003649, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:54:09.756176) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 12621235 bytes
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:54:09.756675) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 386.3 rd, 329.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.1 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(26.5) write-amplify(12.2) OK, records in: 5454, records dropped: 516 output_compression: NoCompression
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:54:09.756693) EVENT_LOG_v1 {"time_micros": 1760003649756683, "job": 20, "event": "compaction_finished", "compaction_time_micros": 38276, "compaction_time_cpu_micros": 20862, "output_level": 6, "num_output_files": 1, "total_output_size": 12621235, "num_input_records": 5454, "num_output_records": 4938, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649757291, "job": 20, "event": "table_file_deletion", "file_number": 43}
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649759753, "job": 20, "event": "table_file_deletion", "file_number": 41}
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:54:09.717648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:54:09.759895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:54:09.759900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:54:09.759902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:54:09.759903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:54:09 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:54:09.759904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:54:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:09 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:09 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:09 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:54:10.106 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:54:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:54:10.106 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:54:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:54:10.107 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:54:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-0[9729]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Oct  9 09:54:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v620: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:54:10 compute-0 podman[191341]: 2025-10-09 09:54:10.602730561 +0000 UTC m=+0.040050729 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  9 09:54:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:54:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:11.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:54:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:11.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:54:12] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Oct  9 09:54:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:54:12] "GET /metrics HTTP/1.1" 200 48423 "" "Prometheus/2.51.0"
Oct  9 09:54:12 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v621: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:54:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:13.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:13.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:14 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v622: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:54:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:14 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:14 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:14 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:15.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:15.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:16 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v623: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:54:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:17.037Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:17.045Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:17.046Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:17.046Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:17.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:17.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:18 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v624: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:54:18 compute-0 podman[191391]: 2025-10-09 09:54:18.609880179 +0000 UTC m=+0.044890355 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:54:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:18.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:18.878Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:18.878Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:18.879Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:54:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:19.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:54:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:54:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:54:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:19.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:54:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:54:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:54:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:54:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:54:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:54:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:20 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:20 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:20 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v625: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:54:20 compute-0 podman[191409]: 2025-10-09 09:54:20.612018204 +0000 UTC m=+0.053124011 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:54:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:21.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:21.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:54:22] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Oct  9 09:54:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:54:22] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Oct  9 09:54:22 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v626: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:54:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:54:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:23.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:54:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:23.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:24 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v627: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:54:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:24 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:24 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:24 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:25 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:25.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:25.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:26 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v628: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:54:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:27.038Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:27.047Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:27.048Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:27.048Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:27.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:54:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:27.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:54:28 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v629: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:54:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=404 latency=0.001000011s ======
Oct  9 09:54:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:28.647 +0000] "GET /info HTTP/1.1" 404 152 - "python-urllib3/1.26.5" - latency=0.001000011s
Oct  9 09:54:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - - [09/Oct/2025:09:54:28.656 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.001000011s
Oct  9 09:54:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:28.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:28.880Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:28.881Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:28.881Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:29.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:29 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct  9 09:54:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:54:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:29.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:54:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:30 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:30 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v630: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:54:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:31.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:31 compute-0 podman[191460]: 2025-10-09 09:54:31.355982817 +0000 UTC m=+0.095875242 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  9 09:54:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:31.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:54:32] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Oct  9 09:54:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:54:32] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Oct  9 09:54:32 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v631: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:54:32 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct  9 09:54:32 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct  9 09:54:32 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct  9 09:54:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:33.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:54:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:33.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:54:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct  9 09:54:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct  9 09:54:33 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct  9 09:54:34 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v634: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Oct  9 09:54:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:54:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:54:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct  9 09:54:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct  9 09:54:34 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct  9 09:54:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:34 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:34 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:34 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:35.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:35.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:35 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct  9 09:54:35 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct  9 09:54:35 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct  9 09:54:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:36 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v637: 337 pgs: 337 active+clean; 21 MiB data, 174 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 5.1 MiB/s wr, 68 op/s
Oct  9 09:54:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:37.039Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:37.051Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:37.051Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:37.051Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:37.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:37.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:38 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v638: 337 pgs: 337 active+clean; 21 MiB data, 174 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.7 MiB/s wr, 50 op/s
Oct  9 09:54:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:38.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:38.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:38.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:38.884Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:39.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:39.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:40 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:40 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:40 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:40 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v639: 337 pgs: 337 active+clean; 21 MiB data, 174 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.2 MiB/s wr, 42 op/s
Oct  9 09:54:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:41.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct  9 09:54:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct  9 09:54:41 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct  9 09:54:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:41.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:41 compute-0 podman[191496]: 2025-10-09 09:54:41.614737656 +0000 UTC m=+0.045836198 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid)
Oct  9 09:54:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:54:42] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Oct  9 09:54:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:54:42] "GET /metrics HTTP/1.1" 200 48422 "" "Prometheus/2.51.0"
Oct  9 09:54:42 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v641: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 5.5 MiB/s wr, 52 op/s
Oct  9 09:54:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:43.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:43.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:44 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v642: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 2.4 MiB/s wr, 14 op/s
Oct  9 09:54:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:44 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:45 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:45 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:45 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:45.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:45.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:46 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v643: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 2.0 MiB/s wr, 12 op/s
Oct  9 09:54:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:47.040Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:47.056Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:47.056Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:47.056Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
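[editor's note] All three webhook failures above are one DNS problem: the resolver at 192.168.122.80:53 has no records for the *.shiftstack receiver hosts. A minimal reproduction, assuming it is run on a host whose system resolver is the same 192.168.122.80 the container uses:

    import socket

    for host in ("np0005478302.shiftstack",
                 "np0005478303.shiftstack",
                 "np0005478304.shiftstack"):
        try:
            socket.getaddrinfo(host, 8443)
            print(host, "resolves")
        except socket.gaierror as exc:
            # Corresponds to the "no such host" Alertmanager reports above.
            print(host, "->", exc)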
Oct  9 09:54:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:47.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:47.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:48 compute-0 nova_compute[187439]: 2025-10-09 09:54:48.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:48 compute-0 nova_compute[187439]: 2025-10-09 09:54:48.264 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:54:48 compute-0 nova_compute[187439]: 2025-10-09 09:54:48.265 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:54:48 compute-0 nova_compute[187439]: 2025-10-09 09:54:48.265 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:54:48 compute-0 nova_compute[187439]: 2025-10-09 09:54:48.265 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:54:48 compute-0 nova_compute[187439]: 2025-10-09 09:54:48.265 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:54:48 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v644: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 2.0 MiB/s wr, 12 op/s
Oct  9 09:54:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:54:48 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4012396778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:54:48 compute-0 nova_compute[187439]: 2025-10-09 09:54:48.633 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.368s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
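[editor's note] The subprocess above is how nova's resource tracker sizes RBD-backed storage. A sketch of the same call and the fields it consumes, assuming the JSON layout of current Ceph releases (a top-level "stats" object with byte totals):

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(raw)["stats"]
    print(f"avail: {stats['total_avail_bytes'] / 1024**3:.2f} GiB "
          f"of {stats['total_bytes'] / 1024**3:.2f} GiB")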
Oct  9 09:54:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:48.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:48.884Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:48.884Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:48.884Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:48 compute-0 nova_compute[187439]: 2025-10-09 09:54:48.891 2 WARNING nova.virt.libvirt.driver [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:54:48 compute-0 nova_compute[187439]: 2025-10-09 09:54:48.893 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5062MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:54:48 compute-0 nova_compute[187439]: 2025-10-09 09:54:48.893 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:54:48 compute-0 nova_compute[187439]: 2025-10-09 09:54:48.893 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:54:48 compute-0 nova_compute[187439]: 2025-10-09 09:54:48.946 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:54:48 compute-0 nova_compute[187439]: 2025-10-09 09:54:48.947 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:54:48 compute-0 nova_compute[187439]: 2025-10-09 09:54:48.968 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:54:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:49.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:49 compute-0 nova_compute[187439]: 2025-10-09 09:54:49.355 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.387s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:54:49 compute-0 nova_compute[187439]: 2025-10-09 09:54:49.360 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:54:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:54:49
Oct  9 09:54:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:54:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 09:54:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', '.nfs', 'volumes', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', '.mgr', 'backups', 'default.rgw.control', 'images', 'default.rgw.log']
Oct  9 09:54:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 09:54:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:54:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:54:49 compute-0 nova_compute[187439]: 2025-10-09 09:54:49.597 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:54:49 compute-0 nova_compute[187439]: 2025-10-09 09:54:49.599 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:54:49 compute-0 nova_compute[187439]: 2025-10-09 09:54:49.599 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
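[editor's note] The "Inventory has not changed" record above carries the whole capacity calculation placement will apply: per resource class, usable capacity is (total - reserved) * allocation_ratio. Worked out from the logged inventory (values copied from the record; the formula is assumed from placement's documented behaviour):

    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 4,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # MEMORY_MB 7168.0, VCPU 16.0, DISK_GB 53.1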
Oct  9 09:54:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:54:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:54:49 compute-0 podman[191565]: 2025-10-09 09:54:49.609849395 +0000 UTC m=+0.043950172 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  9 09:54:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:49.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:54:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:54:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:54:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:54:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:54:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:54:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:54:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:54:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:54:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:54:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:54:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:54:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:54:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:54:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:50 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:50 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:50 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:50 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v645: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 2.0 MiB/s wr, 12 op/s
Oct  9 09:54:50 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 09:54:50 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 7350 writes, 29K keys, 7350 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 7350 writes, 1505 syncs, 4.88 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 500 writes, 874 keys, 500 commit groups, 1.0 writes per commit group, ingest: 0.37 MB, 0.00 MB/s#012Interval WAL: 500 writes, 241 syncs, 2.07 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563ba5b2f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563ba5b2f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Oct  9 09:54:50 compute-0 nova_compute[187439]: 2025-10-09 09:54:50.599 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:50 compute-0 nova_compute[187439]: 2025-10-09 09:54:50.599 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:50 compute-0 nova_compute[187439]: 2025-10-09 09:54:50.600 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 09:54:50 compute-0 nova_compute[187439]: 2025-10-09 09:54:50.600 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 09:54:50 compute-0 nova_compute[187439]: 2025-10-09 09:54:50.613 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 09:54:50 compute-0 nova_compute[187439]: 2025-10-09 09:54:50.613 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:50 compute-0 nova_compute[187439]: 2025-10-09 09:54:50.613 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:50 compute-0 nova_compute[187439]: 2025-10-09 09:54:50.614 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:50 compute-0 nova_compute[187439]: 2025-10-09 09:54:50.614 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:50 compute-0 nova_compute[187439]: 2025-10-09 09:54:50.614 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:51.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:51 compute-0 nova_compute[187439]: 2025-10-09 09:54:51.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:51 compute-0 nova_compute[187439]: 2025-10-09 09:54:51.259 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:51 compute-0 nova_compute[187439]: 2025-10-09 09:54:51.259 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 09:54:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:51 compute-0 podman[191609]: 2025-10-09 09:54:51.421018383 +0000 UTC m=+0.077188563 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  9 09:54:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:51.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:54:52.051 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:54:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:54:52.052 92053 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 09:54:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:54:52] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Oct  9 09:54:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:54:52] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Oct  9 09:54:52 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v646: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 09:54:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:53.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:53.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:54 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v647: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:54:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:54 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:55 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:55 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:55 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:55.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:55.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:54:56 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:54:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:54:56 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:54:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:56 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v648: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:54:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:54:56 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:54:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:54:56 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:54:56 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v649: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 09:54:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:54:56 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:54:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:54:56 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:54:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:54:56 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:54:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:54:56 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:54:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:54:56 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:54:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:57.041Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:57.049Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:57.049Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:57.050Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:57.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:57 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:54:57 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:54:57 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:54:57 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:54:57 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:54:57 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:54:57 compute-0 podman[191861]: 2025-10-09 09:54:57.272940302 +0000 UTC m=+0.040420525 container create c9d6b18c0a13acda580a7a379e549f91e3fe1efe85d2bdfd8bb8712d6d4dfa1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:54:57 compute-0 systemd[1]: Started libpod-conmon-c9d6b18c0a13acda580a7a379e549f91e3fe1efe85d2bdfd8bb8712d6d4dfa1c.scope.
Oct  9 09:54:57 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:54:57 compute-0 podman[191861]: 2025-10-09 09:54:57.34959647 +0000 UTC m=+0.117076703 container init c9d6b18c0a13acda580a7a379e549f91e3fe1efe85d2bdfd8bb8712d6d4dfa1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_varahamihira, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  9 09:54:57 compute-0 podman[191861]: 2025-10-09 09:54:57.258654311 +0000 UTC m=+0.026134545 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:54:57 compute-0 podman[191861]: 2025-10-09 09:54:57.355759852 +0000 UTC m=+0.123240066 container start c9d6b18c0a13acda580a7a379e549f91e3fe1efe85d2bdfd8bb8712d6d4dfa1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_varahamihira, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 09:54:57 compute-0 podman[191861]: 2025-10-09 09:54:57.356897678 +0000 UTC m=+0.124377891 container attach c9d6b18c0a13acda580a7a379e549f91e3fe1efe85d2bdfd8bb8712d6d4dfa1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_varahamihira, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  9 09:54:57 compute-0 hungry_varahamihira[191875]: 167 167
Oct  9 09:54:57 compute-0 systemd[1]: libpod-c9d6b18c0a13acda580a7a379e549f91e3fe1efe85d2bdfd8bb8712d6d4dfa1c.scope: Deactivated successfully.
Oct  9 09:54:57 compute-0 conmon[191875]: conmon c9d6b18c0a13acda580a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c9d6b18c0a13acda580a7a379e549f91e3fe1efe85d2bdfd8bb8712d6d4dfa1c.scope/container/memory.events
Oct  9 09:54:57 compute-0 podman[191861]: 2025-10-09 09:54:57.361869284 +0000 UTC m=+0.129349497 container died c9d6b18c0a13acda580a7a379e549f91e3fe1efe85d2bdfd8bb8712d6d4dfa1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_varahamihira, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 09:54:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b3b451515de3b606de41cf5fd4495203b1f1feb76b26e94b54a71fc5843829d-merged.mount: Deactivated successfully.
Oct  9 09:54:57 compute-0 podman[191861]: 2025-10-09 09:54:57.387867501 +0000 UTC m=+0.155347714 container remove c9d6b18c0a13acda580a7a379e549f91e3fe1efe85d2bdfd8bb8712d6d4dfa1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:54:57 compute-0 systemd[1]: libpod-conmon-c9d6b18c0a13acda580a7a379e549f91e3fe1efe85d2bdfd8bb8712d6d4dfa1c.scope: Deactivated successfully.
Oct  9 09:54:57 compute-0 podman[191897]: 2025-10-09 09:54:57.520487104 +0000 UTC m=+0.031114698 container create 82a02a9c7890315d9d8b5b2ba54d117834e9acdd2a7434a383003bd807053253 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_euler, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  9 09:54:57 compute-0 systemd[1]: Started libpod-conmon-82a02a9c7890315d9d8b5b2ba54d117834e9acdd2a7434a383003bd807053253.scope.
Oct  9 09:54:57 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4346b85ccc01428b757e5ba65dac04da8a68ea66591e68378087cabb8e332e7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4346b85ccc01428b757e5ba65dac04da8a68ea66591e68378087cabb8e332e7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4346b85ccc01428b757e5ba65dac04da8a68ea66591e68378087cabb8e332e7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4346b85ccc01428b757e5ba65dac04da8a68ea66591e68378087cabb8e332e7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4346b85ccc01428b757e5ba65dac04da8a68ea66591e68378087cabb8e332e7e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:54:57 compute-0 podman[191897]: 2025-10-09 09:54:57.582005937 +0000 UTC m=+0.092633529 container init 82a02a9c7890315d9d8b5b2ba54d117834e9acdd2a7434a383003bd807053253 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:54:57 compute-0 podman[191897]: 2025-10-09 09:54:57.589351978 +0000 UTC m=+0.099979572 container start 82a02a9c7890315d9d8b5b2ba54d117834e9acdd2a7434a383003bd807053253 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_euler, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 09:54:57 compute-0 podman[191897]: 2025-10-09 09:54:57.592211311 +0000 UTC m=+0.102838905 container attach 82a02a9c7890315d9d8b5b2ba54d117834e9acdd2a7434a383003bd807053253 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct  9 09:54:57 compute-0 podman[191897]: 2025-10-09 09:54:57.509059523 +0000 UTC m=+0.019687137 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:54:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:57.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:57 compute-0 quirky_euler[191911]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:54:57 compute-0 quirky_euler[191911]: --> All data devices are unavailable
Oct  9 09:54:57 compute-0 systemd[1]: libpod-82a02a9c7890315d9d8b5b2ba54d117834e9acdd2a7434a383003bd807053253.scope: Deactivated successfully.
Oct  9 09:54:57 compute-0 podman[191897]: 2025-10-09 09:54:57.905609425 +0000 UTC m=+0.416237028 container died 82a02a9c7890315d9d8b5b2ba54d117834e9acdd2a7434a383003bd807053253 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_euler, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:54:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-4346b85ccc01428b757e5ba65dac04da8a68ea66591e68378087cabb8e332e7e-merged.mount: Deactivated successfully.
Oct  9 09:54:57 compute-0 podman[191897]: 2025-10-09 09:54:57.930013386 +0000 UTC m=+0.440640979 container remove 82a02a9c7890315d9d8b5b2ba54d117834e9acdd2a7434a383003bd807053253 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_euler, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:54:57 compute-0 systemd[1]: libpod-conmon-82a02a9c7890315d9d8b5b2ba54d117834e9acdd2a7434a383003bd807053253.scope: Deactivated successfully.
Oct  9 09:54:58 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:54:58.054 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ef217152-08e8-40c8-a663-3565c5b77d4a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:54:58 compute-0 podman[192018]: 2025-10-09 09:54:58.438572311 +0000 UTC m=+0.037772581 container create 3f3b7ea8872ca9b5a5ff9931ec7932b8c938c6414329d35fa7c3a09900a91fd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_golick, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  9 09:54:58 compute-0 systemd[1]: Started libpod-conmon-3f3b7ea8872ca9b5a5ff9931ec7932b8c938c6414329d35fa7c3a09900a91fd6.scope.
Oct  9 09:54:58 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:54:58 compute-0 podman[192018]: 2025-10-09 09:54:58.496193984 +0000 UTC m=+0.095394264 container init 3f3b7ea8872ca9b5a5ff9931ec7932b8c938c6414329d35fa7c3a09900a91fd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_golick, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:54:58 compute-0 podman[192018]: 2025-10-09 09:54:58.503230985 +0000 UTC m=+0.102431234 container start 3f3b7ea8872ca9b5a5ff9931ec7932b8c938c6414329d35fa7c3a09900a91fd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_golick, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  9 09:54:58 compute-0 podman[192018]: 2025-10-09 09:54:58.505045696 +0000 UTC m=+0.104245956 container attach 3f3b7ea8872ca9b5a5ff9931ec7932b8c938c6414329d35fa7c3a09900a91fd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_golick, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:54:58 compute-0 friendly_golick[192030]: 167 167
Oct  9 09:54:58 compute-0 systemd[1]: libpod-3f3b7ea8872ca9b5a5ff9931ec7932b8c938c6414329d35fa7c3a09900a91fd6.scope: Deactivated successfully.
Oct  9 09:54:58 compute-0 conmon[192030]: conmon 3f3b7ea8872ca9b5a5ff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f3b7ea8872ca9b5a5ff9931ec7932b8c938c6414329d35fa7c3a09900a91fd6.scope/container/memory.events
Oct  9 09:54:58 compute-0 podman[192018]: 2025-10-09 09:54:58.507723316 +0000 UTC m=+0.106923576 container died 3f3b7ea8872ca9b5a5ff9931ec7932b8c938c6414329d35fa7c3a09900a91fd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_golick, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:54:58 compute-0 podman[192018]: 2025-10-09 09:54:58.42519772 +0000 UTC m=+0.024397980 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:54:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-7eedf286fee18d2bfdece840ad6df63ce05fe10d2364dbbddddcd7fd105aeac4-merged.mount: Deactivated successfully.
Oct  9 09:54:58 compute-0 podman[192018]: 2025-10-09 09:54:58.531673841 +0000 UTC m=+0.130874101 container remove 3f3b7ea8872ca9b5a5ff9931ec7932b8c938c6414329d35fa7c3a09900a91fd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_golick, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:54:58 compute-0 systemd[1]: libpod-conmon-3f3b7ea8872ca9b5a5ff9931ec7932b8c938c6414329d35fa7c3a09900a91fd6.scope: Deactivated successfully.
Oct  9 09:54:58 compute-0 podman[192054]: 2025-10-09 09:54:58.675368439 +0000 UTC m=+0.033251938 container create 77bcbd594d23209f1c2ada1096652cd356c3269f90128ad1ce0e0e28b0c08ad5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_shannon, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  9 09:54:58 compute-0 systemd[1]: Started libpod-conmon-77bcbd594d23209f1c2ada1096652cd356c3269f90128ad1ce0e0e28b0c08ad5.scope.
Oct  9 09:54:58 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:54:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9839a5a1233b047ac22768310139bb9c23ed2e66187bc3f9034eacbd4a587a92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:54:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9839a5a1233b047ac22768310139bb9c23ed2e66187bc3f9034eacbd4a587a92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:54:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9839a5a1233b047ac22768310139bb9c23ed2e66187bc3f9034eacbd4a587a92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:54:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9839a5a1233b047ac22768310139bb9c23ed2e66187bc3f9034eacbd4a587a92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:54:58 compute-0 podman[192054]: 2025-10-09 09:54:58.743364256 +0000 UTC m=+0.101247744 container init 77bcbd594d23209f1c2ada1096652cd356c3269f90128ad1ce0e0e28b0c08ad5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_shannon, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  9 09:54:58 compute-0 podman[192054]: 2025-10-09 09:54:58.750067166 +0000 UTC m=+0.107950655 container start 77bcbd594d23209f1c2ada1096652cd356c3269f90128ad1ce0e0e28b0c08ad5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_shannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:54:58 compute-0 podman[192054]: 2025-10-09 09:54:58.751360003 +0000 UTC m=+0.109243492 container attach 77bcbd594d23209f1c2ada1096652cd356c3269f90128ad1ce0e0e28b0c08ad5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  9 09:54:58 compute-0 podman[192054]: 2025-10-09 09:54:58.662687496 +0000 UTC m=+0.020570985 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:54:58 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v650: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 09:54:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:58.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:58.882Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:58.882Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:54:58.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:54:58 compute-0 funny_shannon[192068]: {
Oct  9 09:54:58 compute-0 funny_shannon[192068]:    "1": [
Oct  9 09:54:58 compute-0 funny_shannon[192068]:        {
Oct  9 09:54:58 compute-0 funny_shannon[192068]:            "devices": [
Oct  9 09:54:58 compute-0 funny_shannon[192068]:                "/dev/loop3"
Oct  9 09:54:58 compute-0 funny_shannon[192068]:            ],
Oct  9 09:54:58 compute-0 funny_shannon[192068]:            "lv_name": "ceph_lv0",
Oct  9 09:54:58 compute-0 funny_shannon[192068]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:54:58 compute-0 funny_shannon[192068]:            "lv_size": "21470642176",
Oct  9 09:54:58 compute-0 funny_shannon[192068]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:54:58 compute-0 funny_shannon[192068]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:54:58 compute-0 funny_shannon[192068]:            "name": "ceph_lv0",
Oct  9 09:54:58 compute-0 funny_shannon[192068]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:54:58 compute-0 funny_shannon[192068]:            "tags": {
Oct  9 09:54:58 compute-0 funny_shannon[192068]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:54:58 compute-0 funny_shannon[192068]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:54:58 compute-0 funny_shannon[192068]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:54:58 compute-0 funny_shannon[192068]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:54:58 compute-0 funny_shannon[192068]:                "ceph.cluster_name": "ceph",
Oct  9 09:54:58 compute-0 funny_shannon[192068]:                "ceph.crush_device_class": "",
Oct  9 09:54:58 compute-0 funny_shannon[192068]:                "ceph.encrypted": "0",
Oct  9 09:54:58 compute-0 funny_shannon[192068]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:54:58 compute-0 funny_shannon[192068]:                "ceph.osd_id": "1",
Oct  9 09:54:58 compute-0 funny_shannon[192068]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:54:58 compute-0 funny_shannon[192068]:                "ceph.type": "block",
Oct  9 09:54:58 compute-0 funny_shannon[192068]:                "ceph.vdo": "0",
Oct  9 09:54:58 compute-0 funny_shannon[192068]:                "ceph.with_tpm": "0"
Oct  9 09:54:58 compute-0 funny_shannon[192068]:            },
Oct  9 09:54:58 compute-0 funny_shannon[192068]:            "type": "block",
Oct  9 09:54:58 compute-0 funny_shannon[192068]:            "vg_name": "ceph_vg0"
Oct  9 09:54:58 compute-0 funny_shannon[192068]:        }
Oct  9 09:54:58 compute-0 funny_shannon[192068]:    ]
Oct  9 09:54:58 compute-0 funny_shannon[192068]: }
Oct  9 09:54:59 compute-0 systemd[1]: libpod-77bcbd594d23209f1c2ada1096652cd356c3269f90128ad1ce0e0e28b0c08ad5.scope: Deactivated successfully.
Oct  9 09:54:59 compute-0 podman[192054]: 2025-10-09 09:54:59.008041843 +0000 UTC m=+0.365925331 container died 77bcbd594d23209f1c2ada1096652cd356c3269f90128ad1ce0e0e28b0c08ad5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_shannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:54:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-9839a5a1233b047ac22768310139bb9c23ed2e66187bc3f9034eacbd4a587a92-merged.mount: Deactivated successfully.
Oct  9 09:54:59 compute-0 podman[192054]: 2025-10-09 09:54:59.033424218 +0000 UTC m=+0.391307707 container remove 77bcbd594d23209f1c2ada1096652cd356c3269f90128ad1ce0e0e28b0c08ad5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_shannon, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:54:59 compute-0 systemd[1]: libpod-conmon-77bcbd594d23209f1c2ada1096652cd356c3269f90128ad1ce0e0e28b0c08ad5.scope: Deactivated successfully.
Oct  9 09:54:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:59.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:54:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  9 09:54:59 compute-0 podman[192168]: 2025-10-09 09:54:59.529167327 +0000 UTC m=+0.031481508 container create 402be4d574fba1fdcf9075c106a8c2c6e17f9d5e2c74a993f0268cfa6c591fce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:54:59 compute-0 systemd[1]: Started libpod-conmon-402be4d574fba1fdcf9075c106a8c2c6e17f9d5e2c74a993f0268cfa6c591fce.scope.
Oct  9 09:54:59 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:54:59 compute-0 podman[192168]: 2025-10-09 09:54:59.573586012 +0000 UTC m=+0.075900213 container init 402be4d574fba1fdcf9075c106a8c2c6e17f9d5e2c74a993f0268cfa6c591fce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_poitras, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:54:59 compute-0 podman[192168]: 2025-10-09 09:54:59.578203449 +0000 UTC m=+0.080517630 container start 402be4d574fba1fdcf9075c106a8c2c6e17f9d5e2c74a993f0268cfa6c591fce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_poitras, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:54:59 compute-0 podman[192168]: 2025-10-09 09:54:59.580654881 +0000 UTC m=+0.082969062 container attach 402be4d574fba1fdcf9075c106a8c2c6e17f9d5e2c74a993f0268cfa6c591fce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:54:59 compute-0 zen_poitras[192182]: 167 167
Oct  9 09:54:59 compute-0 systemd[1]: libpod-402be4d574fba1fdcf9075c106a8c2c6e17f9d5e2c74a993f0268cfa6c591fce.scope: Deactivated successfully.
Oct  9 09:54:59 compute-0 podman[192168]: 2025-10-09 09:54:59.581680495 +0000 UTC m=+0.083994677 container died 402be4d574fba1fdcf9075c106a8c2c6e17f9d5e2c74a993f0268cfa6c591fce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_poitras, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:54:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-6300ec2d1d01dae2b80dc987b18188e79a379416215cf50cc120c100ca33823a-merged.mount: Deactivated successfully.
Oct  9 09:54:59 compute-0 podman[192168]: 2025-10-09 09:54:59.601856634 +0000 UTC m=+0.104170814 container remove 402be4d574fba1fdcf9075c106a8c2c6e17f9d5e2c74a993f0268cfa6c591fce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_poitras, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:54:59 compute-0 podman[192168]: 2025-10-09 09:54:59.516543001 +0000 UTC m=+0.018857201 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:54:59 compute-0 systemd[1]: libpod-conmon-402be4d574fba1fdcf9075c106a8c2c6e17f9d5e2c74a993f0268cfa6c591fce.scope: Deactivated successfully.
Oct  9 09:54:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:54:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:59.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:59 compute-0 podman[192203]: 2025-10-09 09:54:59.738891663 +0000 UTC m=+0.033710833 container create 41af479f55ec26ada038f782ba28b1df5871b7344fcb7990ee7bd2c6fb8dda1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_stonebraker, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 09:54:59 compute-0 systemd[1]: Started libpod-conmon-41af479f55ec26ada038f782ba28b1df5871b7344fcb7990ee7bd2c6fb8dda1b.scope.
Oct  9 09:54:59 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:54:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75682898e7539bbe212c23f0e65ec00b023216239fa749cc2b6c2970fda3b4e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:54:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75682898e7539bbe212c23f0e65ec00b023216239fa749cc2b6c2970fda3b4e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:54:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75682898e7539bbe212c23f0e65ec00b023216239fa749cc2b6c2970fda3b4e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:54:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75682898e7539bbe212c23f0e65ec00b023216239fa749cc2b6c2970fda3b4e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:54:59 compute-0 podman[192203]: 2025-10-09 09:54:59.803258706 +0000 UTC m=+0.098077876 container init 41af479f55ec26ada038f782ba28b1df5871b7344fcb7990ee7bd2c6fb8dda1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_stonebraker, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:54:59 compute-0 podman[192203]: 2025-10-09 09:54:59.809233504 +0000 UTC m=+0.104052673 container start 41af479f55ec26ada038f782ba28b1df5871b7344fcb7990ee7bd2c6fb8dda1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:54:59 compute-0 podman[192203]: 2025-10-09 09:54:59.810733593 +0000 UTC m=+0.105552772 container attach 41af479f55ec26ada038f782ba28b1df5871b7344fcb7990ee7bd2c6fb8dda1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_stonebraker, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:54:59 compute-0 podman[192203]: 2025-10-09 09:54:59.725388409 +0000 UTC m=+0.020207597 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:55:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:54:59 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:00 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:00 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:00 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:00 compute-0 funny_stonebraker[192217]: {}
Oct  9 09:55:00 compute-0 lvm[192295]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:55:00 compute-0 lvm[192295]: VG ceph_vg0 finished
Oct  9 09:55:00 compute-0 systemd[1]: libpod-41af479f55ec26ada038f782ba28b1df5871b7344fcb7990ee7bd2c6fb8dda1b.scope: Deactivated successfully.
Oct  9 09:55:00 compute-0 systemd[1]: libpod-41af479f55ec26ada038f782ba28b1df5871b7344fcb7990ee7bd2c6fb8dda1b.scope: Consumed 1.002s CPU time.
Oct  9 09:55:00 compute-0 podman[192203]: 2025-10-09 09:55:00.435595101 +0000 UTC m=+0.730414271 container died 41af479f55ec26ada038f782ba28b1df5871b7344fcb7990ee7bd2c6fb8dda1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_stonebraker, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  9 09:55:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-75682898e7539bbe212c23f0e65ec00b023216239fa749cc2b6c2970fda3b4e2-merged.mount: Deactivated successfully.
Oct  9 09:55:00 compute-0 podman[192203]: 2025-10-09 09:55:00.468910306 +0000 UTC m=+0.763729475 container remove 41af479f55ec26ada038f782ba28b1df5871b7344fcb7990ee7bd2c6fb8dda1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_stonebraker, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct  9 09:55:00 compute-0 systemd[1]: libpod-conmon-41af479f55ec26ada038f782ba28b1df5871b7344fcb7990ee7bd2c6fb8dda1b.scope: Deactivated successfully.
Oct  9 09:55:00 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:55:00 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:55:00 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:55:00 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:55:00 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v651: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 09:55:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:01.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:01 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:55:01 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:55:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:01.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:01 compute-0 podman[192331]: 2025-10-09 09:55:01.651919576 +0000 UTC m=+0.083500025 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251001)
Oct  9 09:55:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:55:02] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Oct  9 09:55:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:55:02] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Oct  9 09:55:02 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v652: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 09:55:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:03.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:03.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:55:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
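
The handle_command/audit pair above shows the mgr dispatching a monitor command as a JSON payload. A minimal sketch of issuing the same command through the python-rados bindings; the client name and conf path here are assumptions (in the log the caller is the mgr itself):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
    cluster.connect()
    try:
        # Same JSON payload the monitor logs as mon_command(...) above.
        cmd = json.dumps({'prefix': 'osd blocklist ls', 'format': 'json'})
        ret, out, errs = cluster.mon_command(cmd, b'')
        print(ret, json.loads(out or b'[]'))
    finally:
        cluster.shutdown()
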
Oct  9 09:55:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v653: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 09:55:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:04 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:04 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:04 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:05 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:05.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:05.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v654: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 09:55:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:07.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:07.051Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:07.051Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:07.051Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:55:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:07.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:55:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:07.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v655: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:55:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:08.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:08.881Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:08.881Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:08.881Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
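
Every webhook attempt above fails in DNS: the *.shiftstack receiver names do not resolve through the resolver the errors name (192.168.122.80:53). A quick check that reproduces the symptom outside Alertmanager, assuming it is run on a host using that same system resolver:

    import socket

    for host in ('np0005478302.shiftstack',
                 'np0005478303.shiftstack',
                 'np0005478304.shiftstack'):
        try:
            socket.getaddrinfo(host, 8443)
            print(host, 'resolves')
        except socket.gaierror as exc:
            print(host, 'failed:', exc)  # expected here: name does not resolve
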
Oct  9 09:55:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:09.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:09.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:09 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:09 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:09 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:10.107 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:10.108 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:10.108 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
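
The Acquiring/acquired/released triplet above is the standard DEBUG trace oslo.concurrency's lockutils leaves when a guarded method runs (the cited lockutils.py offsets are its lock wrapper). A minimal sketch of the pattern, with the lock name copied from the log and a placeholder body:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        # Runs with the named semaphore held; acquisition and release are
        # logged at DEBUG, producing triplets like the ones above.
        pass

    _check_child_processes()
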
Oct  9 09:55:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v656: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:55:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:11.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:11 compute-0 nova_compute[187439]: 2025-10-09 09:55:11.482 2 DEBUG oslo_concurrency.lockutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "bb0dd1df-5930-471c-a79b-b51d83e9431b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:11 compute-0 nova_compute[187439]: 2025-10-09 09:55:11.483 2 DEBUG oslo_concurrency.lockutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "bb0dd1df-5930-471c-a79b-b51d83e9431b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:11 compute-0 nova_compute[187439]: 2025-10-09 09:55:11.496 2 DEBUG nova.compute.manager [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  9 09:55:11 compute-0 nova_compute[187439]: 2025-10-09 09:55:11.574 2 DEBUG oslo_concurrency.lockutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:11 compute-0 nova_compute[187439]: 2025-10-09 09:55:11.574 2 DEBUG oslo_concurrency.lockutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:11 compute-0 nova_compute[187439]: 2025-10-09 09:55:11.580 2 DEBUG nova.virt.hardware [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  9 09:55:11 compute-0 nova_compute[187439]: 2025-10-09 09:55:11.580 2 INFO nova.compute.claims [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  9 09:55:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:11.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:11 compute-0 nova_compute[187439]: 2025-10-09 09:55:11.657 2 DEBUG oslo_concurrency.processutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:55:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  9 09:55:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3577999995' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  9 09:55:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  9 09:55:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3577999995' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  9 09:55:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:55:12 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1002258365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:55:12 compute-0 nova_compute[187439]: 2025-10-09 09:55:12.045 2 DEBUG oslo_concurrency.processutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
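
The Running cmd / CMD returned pair above is Nova probing Ceph pool capacity by shelling out. A reproduction sketch using the same oslo processutils call path; the command line is copied verbatim from the log, and a readable client.openstack keyring is assumed:

    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    df = json.loads(out)
    print(df['stats'])  # cluster totals; per-pool figures live under 'pools'
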
Oct  9 09:55:12 compute-0 nova_compute[187439]: 2025-10-09 09:55:12.052 2 DEBUG nova.compute.provider_tree [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:55:12 compute-0 nova_compute[187439]: 2025-10-09 09:55:12.065 2 DEBUG nova.scheduler.client.report [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:55:12 compute-0 nova_compute[187439]: 2025-10-09 09:55:12.081 2 DEBUG oslo_concurrency.lockutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:55:12 compute-0 nova_compute[187439]: 2025-10-09 09:55:12.082 2 DEBUG nova.compute.manager [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  9 09:55:12 compute-0 nova_compute[187439]: 2025-10-09 09:55:12.115 2 DEBUG nova.compute.manager [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  9 09:55:12 compute-0 nova_compute[187439]: 2025-10-09 09:55:12.115 2 DEBUG nova.network.neutron [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  9 09:55:12 compute-0 nova_compute[187439]: 2025-10-09 09:55:12.139 2 INFO nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  9 09:55:12 compute-0 nova_compute[187439]: 2025-10-09 09:55:12.150 2 DEBUG nova.compute.manager [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  9 09:55:12 compute-0 nova_compute[187439]: 2025-10-09 09:55:12.232 2 DEBUG nova.compute.manager [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  9 09:55:12 compute-0 nova_compute[187439]: 2025-10-09 09:55:12.233 2 DEBUG nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  9 09:55:12 compute-0 nova_compute[187439]: 2025-10-09 09:55:12.233 2 INFO nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Creating image(s)#033[00m
Oct  9 09:55:12 compute-0 nova_compute[187439]: 2025-10-09 09:55:12.260 2 DEBUG nova.storage.rbd_utils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image bb0dd1df-5930-471c-a79b-b51d83e9431b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:55:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:55:12] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct  9 09:55:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:55:12] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Oct  9 09:55:12 compute-0 nova_compute[187439]: 2025-10-09 09:55:12.285 2 DEBUG nova.storage.rbd_utils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image bb0dd1df-5930-471c-a79b-b51d83e9431b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:55:12 compute-0 nova_compute[187439]: 2025-10-09 09:55:12.311 2 DEBUG nova.storage.rbd_utils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image bb0dd1df-5930-471c-a79b-b51d83e9431b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:55:12 compute-0 nova_compute[187439]: 2025-10-09 09:55:12.317 2 DEBUG oslo_concurrency.lockutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:12 compute-0 nova_compute[187439]: 2025-10-09 09:55:12.318 2 DEBUG oslo_concurrency.lockutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:12 compute-0 podman[192467]: 2025-10-09 09:55:12.61318399 +0000 UTC m=+0.045773952 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  9 09:55:12 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v657: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:55:12 compute-0 nova_compute[187439]: 2025-10-09 09:55:12.900 2 DEBUG nova.virt.libvirt.imagebackend [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image locations are: [{'url': 'rbd://286f8bf0-da72-5823-9a4e-ac4457d9e609/images/9546778e-959c-466e-9bef-81ace5bd1cc5/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://286f8bf0-da72-5823-9a4e-ac4457d9e609/images/9546778e-959c-466e-9bef-81ace5bd1cc5/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  9 09:55:12 compute-0 nova_compute[187439]: 2025-10-09 09:55:12.958 2 WARNING oslo_policy.policy [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Oct  9 09:55:12 compute-0 nova_compute[187439]: 2025-10-09 09:55:12.959 2 WARNING oslo_policy.policy [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Oct  9 09:55:12 compute-0 nova_compute[187439]: 2025-10-09 09:55:12.961 2 DEBUG nova.policy [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  9 09:55:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:13.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:13.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:13 compute-0 nova_compute[187439]: 2025-10-09 09:55:13.757 2 DEBUG oslo_concurrency.processutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:55:13 compute-0 nova_compute[187439]: 2025-10-09 09:55:13.815 2 DEBUG oslo_concurrency.processutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.part --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:55:13 compute-0 nova_compute[187439]: 2025-10-09 09:55:13.816 2 DEBUG nova.virt.images [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] 9546778e-959c-466e-9bef-81ace5bd1cc5 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Oct  9 09:55:13 compute-0 nova_compute[187439]: 2025-10-09 09:55:13.818 2 DEBUG nova.privsep.utils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct  9 09:55:13 compute-0 nova_compute[187439]: 2025-10-09 09:55:13.818 2 DEBUG oslo_concurrency.processutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.part /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:55:13 compute-0 nova_compute[187439]: 2025-10-09 09:55:13.891 2 DEBUG oslo_concurrency.processutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.part /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.converted" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:55:13 compute-0 nova_compute[187439]: 2025-10-09 09:55:13.896 2 DEBUG oslo_concurrency.processutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:55:13 compute-0 nova_compute[187439]: 2025-10-09 09:55:13.957 2 DEBUG oslo_concurrency.processutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.converted --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:55:13 compute-0 nova_compute[187439]: 2025-10-09 09:55:13.958 2 DEBUG oslo_concurrency.lockutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
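
The 09:55:13 sequence above traces Nova's image-cache staging under the per-image lock: qemu-img info on the downloaded .part file, a qcow2-to-raw conversion, then a second info on the .converted result. A condensed sketch of that sequence; the base path is copied from the log and the prlimit wrapper Nova adds is omitted:

    import json
    import subprocess

    BASE = '/var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb'

    def img_info(path):
        # Same probe as the logged "qemu-img info ... --output=json" calls.
        out = subprocess.check_output(
            ['qemu-img', 'info', path, '--force-share', '--output=json'],
            env={'LC_ALL': 'C', 'LANG': 'C'})
        return json.loads(out)

    assert img_info(BASE + '.part')['format'] == 'qcow2'
    subprocess.check_call(['qemu-img', 'convert', '-t', 'none', '-O', 'raw',
                           '-f', 'qcow2', BASE + '.part', BASE + '.converted'])
    assert img_info(BASE + '.converted')['format'] == 'raw'
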
Oct  9 09:55:13 compute-0 nova_compute[187439]: 2025-10-09 09:55:13.983 2 DEBUG nova.storage.rbd_utils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image bb0dd1df-5930-471c-a79b-b51d83e9431b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:55:13 compute-0 nova_compute[187439]: 2025-10-09 09:55:13.988 2 DEBUG oslo_concurrency.processutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb bb0dd1df-5930-471c-a79b-b51d83e9431b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:55:14 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v658: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:55:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct  9 09:55:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct  9 09:55:14 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct  9 09:55:14 compute-0 nova_compute[187439]: 2025-10-09 09:55:14.979 2 DEBUG nova.network.neutron [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Successfully created port: 5ebc58bd-1327-457d-a25b-9c56c1001f06 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  9 09:55:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:14 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:14 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:14 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:14 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:55:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:15.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:55:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:15.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:15 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct  9 09:55:15 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct  9 09:55:15 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct  9 09:55:16 compute-0 nova_compute[187439]: 2025-10-09 09:55:16.089 2 DEBUG oslo_concurrency.processutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb bb0dd1df-5930-471c-a79b-b51d83e9431b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:55:16 compute-0 nova_compute[187439]: 2025-10-09 09:55:16.146 2 DEBUG nova.storage.rbd_utils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] resizing rbd image bb0dd1df-5930-471c-a79b-b51d83e9431b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
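
After the CLI import returns (2.101s), Nova grows the image to the flavor's 1 GiB root disk, per the resize logged at 09:55:16.146. A sketch of that resize through the python-rbd bindings (the libraries nova.storage.rbd_utils is built on); pool, image name, and size come from the log, and credentials matching client.openstack are assumed:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx,
                           'bb0dd1df-5930-471c-a79b-b51d83e9431b_disk') as image:
                image.resize(1073741824)  # 1 GiB, as logged at 09:55:16.146
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
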
Oct  9 09:55:16 compute-0 nova_compute[187439]: 2025-10-09 09:55:16.206 2 DEBUG nova.objects.instance [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'migration_context' on Instance uuid bb0dd1df-5930-471c-a79b-b51d83e9431b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:55:16 compute-0 nova_compute[187439]: 2025-10-09 09:55:16.216 2 DEBUG nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  9 09:55:16 compute-0 nova_compute[187439]: 2025-10-09 09:55:16.216 2 DEBUG nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Ensure instance console log exists: /var/lib/nova/instances/bb0dd1df-5930-471c-a79b-b51d83e9431b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  9 09:55:16 compute-0 nova_compute[187439]: 2025-10-09 09:55:16.217 2 DEBUG oslo_concurrency.lockutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:16 compute-0 nova_compute[187439]: 2025-10-09 09:55:16.217 2 DEBUG oslo_concurrency.lockutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:16 compute-0 nova_compute[187439]: 2025-10-09 09:55:16.217 2 DEBUG oslo_concurrency.lockutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:55:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:16 compute-0 nova_compute[187439]: 2025-10-09 09:55:16.352 2 DEBUG nova.network.neutron [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Successfully updated port: 5ebc58bd-1327-457d-a25b-9c56c1001f06 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  9 09:55:16 compute-0 nova_compute[187439]: 2025-10-09 09:55:16.369 2 DEBUG oslo_concurrency.lockutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-bb0dd1df-5930-471c-a79b-b51d83e9431b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:55:16 compute-0 nova_compute[187439]: 2025-10-09 09:55:16.370 2 DEBUG oslo_concurrency.lockutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-bb0dd1df-5930-471c-a79b-b51d83e9431b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:55:16 compute-0 nova_compute[187439]: 2025-10-09 09:55:16.370 2 DEBUG nova.network.neutron [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  9 09:55:16 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v661: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 127 B/s wr, 11 op/s
Oct  9 09:55:16 compute-0 nova_compute[187439]: 2025-10-09 09:55:16.796 2 DEBUG nova.compute.manager [req-662f04d8-9300-4211-b06f-4988246fa63a req-da1d3610-3cc3-429f-8fd6-2f787fc4bb12 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Received event network-changed-5ebc58bd-1327-457d-a25b-9c56c1001f06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:55:16 compute-0 nova_compute[187439]: 2025-10-09 09:55:16.798 2 DEBUG nova.compute.manager [req-662f04d8-9300-4211-b06f-4988246fa63a req-da1d3610-3cc3-429f-8fd6-2f787fc4bb12 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Refreshing instance network info cache due to event network-changed-5ebc58bd-1327-457d-a25b-9c56c1001f06. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 09:55:16 compute-0 nova_compute[187439]: 2025-10-09 09:55:16.798 2 DEBUG oslo_concurrency.lockutils [req-662f04d8-9300-4211-b06f-4988246fa63a req-da1d3610-3cc3-429f-8fd6-2f787fc4bb12 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-bb0dd1df-5930-471c-a79b-b51d83e9431b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:55:16 compute-0 nova_compute[187439]: 2025-10-09 09:55:16.912 2 DEBUG nova.network.neutron [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  9 09:55:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:17.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:17.051Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:17.051Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:17.052Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:17.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.462 2 DEBUG nova.network.neutron [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Updating instance_info_cache with network_info: [{"id": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "address": "fa:16:3e:9e:ca:a2", "network": {"id": "55d0b606-ef1d-4562-907e-2ce1c8e82d1a", "bridge": "br-int", "label": "tempest-network-smoke--846843571", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ebc58bd-13", "ovs_interfaceid": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.476 2 DEBUG oslo_concurrency.lockutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-bb0dd1df-5930-471c-a79b-b51d83e9431b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.477 2 DEBUG nova.compute.manager [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Instance network_info: |[{"id": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "address": "fa:16:3e:9e:ca:a2", "network": {"id": "55d0b606-ef1d-4562-907e-2ce1c8e82d1a", "bridge": "br-int", "label": "tempest-network-smoke--846843571", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ebc58bd-13", "ovs_interfaceid": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.477 2 DEBUG oslo_concurrency.lockutils [req-662f04d8-9300-4211-b06f-4988246fa63a req-da1d3610-3cc3-429f-8fd6-2f787fc4bb12 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-bb0dd1df-5930-471c-a79b-b51d83e9431b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.477 2 DEBUG nova.network.neutron [req-662f04d8-9300-4211-b06f-4988246fa63a req-da1d3610-3cc3-429f-8fd6-2f787fc4bb12 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Refreshing network info cache for port 5ebc58bd-1327-457d-a25b-9c56c1001f06 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.480 2 DEBUG nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Start _get_guest_xml network_info=[{"id": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "address": "fa:16:3e:9e:ca:a2", "network": {"id": "55d0b606-ef1d-4562-907e-2ce1c8e82d1a", "bridge": "br-int", "label": "tempest-network-smoke--846843571", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ebc58bd-13", "ovs_interfaceid": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'boot_index': 0, 'encryption_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'encrypted': False, 'guest_format': None, 'image_id': '9546778e-959c-466e-9bef-81ace5bd1cc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.484 2 WARNING nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.488 2 DEBUG nova.virt.libvirt.host [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.488 2 DEBUG nova.virt.libvirt.host [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.493 2 DEBUG nova.virt.libvirt.host [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.494 2 DEBUG nova.virt.libvirt.host [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.494 2 DEBUG nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.494 2 DEBUG nova.virt.hardware [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T09:54:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c4b2ce4-c9d2-467c-bac4-dc6a1184a891',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.495 2 DEBUG nova.virt.hardware [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.495 2 DEBUG nova.virt.hardware [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.495 2 DEBUG nova.virt.hardware [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.495 2 DEBUG nova.virt.hardware [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.495 2 DEBUG nova.virt.hardware [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.496 2 DEBUG nova.virt.hardware [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.496 2 DEBUG nova.virt.hardware [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.496 2 DEBUG nova.virt.hardware [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.496 2 DEBUG nova.virt.hardware [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.497 2 DEBUG nova.virt.hardware [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.508 2 DEBUG nova.privsep.utils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.510 2 DEBUG oslo_concurrency.processutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:55:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:17.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:17 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 09:55:17 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1547011463' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.886 2 DEBUG oslo_concurrency.processutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.376s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.917 2 DEBUG nova.storage.rbd_utils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image bb0dd1df-5930-471c-a79b-b51d83e9431b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:55:17 compute-0 nova_compute[187439]: 2025-10-09 09:55:17.921 2 DEBUG oslo_concurrency.processutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:55:18 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 09:55:18 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/980579080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.282 2 DEBUG oslo_concurrency.processutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.284 2 DEBUG nova.virt.libvirt.vif [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:55:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1027666294',display_name='tempest-TestNetworkBasicOps-server-1027666294',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1027666294',id=1,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbqXXzO4EL6O7qoSjI6lvqp48ZKfLgqTsWRFa/6Ez5EN4tUY5bL3HEiWU6aomP3iRdq/9JJnaMZ+I5jCxjRHt6+P+gstplvEf4nanxNL34YzLOWaL1PMwWFpFUmL3vFew==',key_name='tempest-TestNetworkBasicOps-219100258',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-2454p41c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:55:12Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=bb0dd1df-5930-471c-a79b-b51d83e9431b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "address": "fa:16:3e:9e:ca:a2", "network": {"id": "55d0b606-ef1d-4562-907e-2ce1c8e82d1a", "bridge": "br-int", "label": "tempest-network-smoke--846843571", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ebc58bd-13", "ovs_interfaceid": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.284 2 DEBUG nova.network.os_vif_util [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "address": "fa:16:3e:9e:ca:a2", "network": {"id": "55d0b606-ef1d-4562-907e-2ce1c8e82d1a", "bridge": "br-int", "label": "tempest-network-smoke--846843571", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ebc58bd-13", "ovs_interfaceid": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.285 2 DEBUG nova.network.os_vif_util [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:ca:a2,bridge_name='br-int',has_traffic_filtering=True,id=5ebc58bd-1327-457d-a25b-9c56c1001f06,network=Network(55d0b606-ef1d-4562-907e-2ce1c8e82d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ebc58bd-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.287 2 DEBUG nova.objects.instance [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_devices' on Instance uuid bb0dd1df-5930-471c-a79b-b51d83e9431b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.301 2 DEBUG nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] End _get_guest_xml xml=<domain type="kvm">
Oct  9 09:55:18 compute-0 nova_compute[187439]:  <uuid>bb0dd1df-5930-471c-a79b-b51d83e9431b</uuid>
Oct  9 09:55:18 compute-0 nova_compute[187439]:  <name>instance-00000001</name>
Oct  9 09:55:18 compute-0 nova_compute[187439]:  <memory>131072</memory>
Oct  9 09:55:18 compute-0 nova_compute[187439]:  <vcpu>1</vcpu>
Oct  9 09:55:18 compute-0 nova_compute[187439]:  <metadata>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <nova:name>tempest-TestNetworkBasicOps-server-1027666294</nova:name>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <nova:creationTime>2025-10-09 09:55:17</nova:creationTime>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <nova:flavor name="m1.nano">
Oct  9 09:55:18 compute-0 nova_compute[187439]:        <nova:memory>128</nova:memory>
Oct  9 09:55:18 compute-0 nova_compute[187439]:        <nova:disk>1</nova:disk>
Oct  9 09:55:18 compute-0 nova_compute[187439]:        <nova:swap>0</nova:swap>
Oct  9 09:55:18 compute-0 nova_compute[187439]:        <nova:ephemeral>0</nova:ephemeral>
Oct  9 09:55:18 compute-0 nova_compute[187439]:        <nova:vcpus>1</nova:vcpus>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      </nova:flavor>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <nova:owner>
Oct  9 09:55:18 compute-0 nova_compute[187439]:        <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct  9 09:55:18 compute-0 nova_compute[187439]:        <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      </nova:owner>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <nova:ports>
Oct  9 09:55:18 compute-0 nova_compute[187439]:        <nova:port uuid="5ebc58bd-1327-457d-a25b-9c56c1001f06">
Oct  9 09:55:18 compute-0 nova_compute[187439]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:        </nova:port>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      </nova:ports>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    </nova:instance>
Oct  9 09:55:18 compute-0 nova_compute[187439]:  </metadata>
Oct  9 09:55:18 compute-0 nova_compute[187439]:  <sysinfo type="smbios">
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <system>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <entry name="manufacturer">RDO</entry>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <entry name="product">OpenStack Compute</entry>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <entry name="serial">bb0dd1df-5930-471c-a79b-b51d83e9431b</entry>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <entry name="uuid">bb0dd1df-5930-471c-a79b-b51d83e9431b</entry>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <entry name="family">Virtual Machine</entry>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    </system>
Oct  9 09:55:18 compute-0 nova_compute[187439]:  </sysinfo>
Oct  9 09:55:18 compute-0 nova_compute[187439]:  <os>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <boot dev="hd"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <smbios mode="sysinfo"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:  </os>
Oct  9 09:55:18 compute-0 nova_compute[187439]:  <features>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <acpi/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <apic/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <vmcoreinfo/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:  </features>
Oct  9 09:55:18 compute-0 nova_compute[187439]:  <clock offset="utc">
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <timer name="pit" tickpolicy="delay"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <timer name="hpet" present="no"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:  </clock>
Oct  9 09:55:18 compute-0 nova_compute[187439]:  <cpu mode="host-model" match="exact">
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <topology sockets="1" cores="1" threads="1"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:  </cpu>
Oct  9 09:55:18 compute-0 nova_compute[187439]:  <devices>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <disk type="network" device="disk">
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <driver type="raw" cache="none"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <source protocol="rbd" name="vms/bb0dd1df-5930-471c-a79b-b51d83e9431b_disk">
Oct  9 09:55:18 compute-0 nova_compute[187439]:        <host name="192.168.122.100" port="6789"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:        <host name="192.168.122.102" port="6789"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:        <host name="192.168.122.101" port="6789"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      </source>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <auth username="openstack">
Oct  9 09:55:18 compute-0 nova_compute[187439]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      </auth>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <target dev="vda" bus="virtio"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    </disk>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <disk type="network" device="cdrom">
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <driver type="raw" cache="none"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <source protocol="rbd" name="vms/bb0dd1df-5930-471c-a79b-b51d83e9431b_disk.config">
Oct  9 09:55:18 compute-0 nova_compute[187439]:        <host name="192.168.122.100" port="6789"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:        <host name="192.168.122.102" port="6789"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:        <host name="192.168.122.101" port="6789"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      </source>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <auth username="openstack">
Oct  9 09:55:18 compute-0 nova_compute[187439]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      </auth>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <target dev="sda" bus="sata"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    </disk>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <interface type="ethernet">
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <mac address="fa:16:3e:9e:ca:a2"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <model type="virtio"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <driver name="vhost" rx_queue_size="512"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <mtu size="1442"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <target dev="tap5ebc58bd-13"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    </interface>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <serial type="pty">
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <log file="/var/lib/nova/instances/bb0dd1df-5930-471c-a79b-b51d83e9431b/console.log" append="off"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    </serial>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <video>
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <model type="virtio"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    </video>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <input type="tablet" bus="usb"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <rng model="virtio">
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <backend model="random">/dev/urandom</backend>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    </rng>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <controller type="usb" index="0"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    <memballoon model="virtio">
Oct  9 09:55:18 compute-0 nova_compute[187439]:      <stats period="10"/>
Oct  9 09:55:18 compute-0 nova_compute[187439]:    </memballoon>
Oct  9 09:55:18 compute-0 nova_compute[187439]:  </devices>
Oct  9 09:55:18 compute-0 nova_compute[187439]: </domain>
Oct  9 09:55:18 compute-0 nova_compute[187439]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.303 2 DEBUG nova.compute.manager [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Preparing to wait for external event network-vif-plugged-5ebc58bd-1327-457d-a25b-9c56c1001f06 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.303 2 DEBUG oslo_concurrency.lockutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "bb0dd1df-5930-471c-a79b-b51d83e9431b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.303 2 DEBUG oslo_concurrency.lockutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "bb0dd1df-5930-471c-a79b-b51d83e9431b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.303 2 DEBUG oslo_concurrency.lockutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "bb0dd1df-5930-471c-a79b-b51d83e9431b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.304 2 DEBUG nova.virt.libvirt.vif [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:55:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1027666294',display_name='tempest-TestNetworkBasicOps-server-1027666294',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1027666294',id=1,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbqXXzO4EL6O7qoSjI6lvqp48ZKfLgqTsWRFa/6Ez5EN4tUY5bL3HEiWU6aomP3iRdq/9JJnaMZ+I5jCxjRHt6+P+gstplvEf4nanxNL34YzLOWaL1PMwWFpFUmL3vFew==',key_name='tempest-TestNetworkBasicOps-219100258',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-2454p41c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:55:12Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=bb0dd1df-5930-471c-a79b-b51d83e9431b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "address": "fa:16:3e:9e:ca:a2", "network": {"id": "55d0b606-ef1d-4562-907e-2ce1c8e82d1a", "bridge": "br-int", "label": "tempest-network-smoke--846843571", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ebc58bd-13", "ovs_interfaceid": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.304 2 DEBUG nova.network.os_vif_util [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "address": "fa:16:3e:9e:ca:a2", "network": {"id": "55d0b606-ef1d-4562-907e-2ce1c8e82d1a", "bridge": "br-int", "label": "tempest-network-smoke--846843571", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ebc58bd-13", "ovs_interfaceid": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.304 2 DEBUG nova.network.os_vif_util [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:ca:a2,bridge_name='br-int',has_traffic_filtering=True,id=5ebc58bd-1327-457d-a25b-9c56c1001f06,network=Network(55d0b606-ef1d-4562-907e-2ce1c8e82d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ebc58bd-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.305 2 DEBUG os_vif [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:ca:a2,bridge_name='br-int',has_traffic_filtering=True,id=5ebc58bd-1327-457d-a25b-9c56c1001f06,network=Network(55d0b606-ef1d-4562-907e-2ce1c8e82d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ebc58bd-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.334 2 DEBUG ovsdbapp.backend.ovs_idl [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.334 2 DEBUG ovsdbapp.backend.ovs_idl [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.335 2 DEBUG ovsdbapp.backend.ovs_idl [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.336 2 DEBUG nova.network.neutron [req-662f04d8-9300-4211-b06f-4988246fa63a req-da1d3610-3cc3-429f-8fd6-2f787fc4bb12 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Updated VIF entry in instance network info cache for port 5ebc58bd-1327-457d-a25b-9c56c1001f06. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.337 2 DEBUG nova.network.neutron [req-662f04d8-9300-4211-b06f-4988246fa63a req-da1d3610-3cc3-429f-8fd6-2f787fc4bb12 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Updating instance_info_cache with network_info: [{"id": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "address": "fa:16:3e:9e:ca:a2", "network": {"id": "55d0b606-ef1d-4562-907e-2ce1c8e82d1a", "bridge": "br-int", "label": "tempest-network-smoke--846843571", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ebc58bd-13", "ovs_interfaceid": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.348 2 DEBUG oslo_concurrency.lockutils [req-662f04d8-9300-4211-b06f-4988246fa63a req-da1d3610-3cc3-429f-8fd6-2f787fc4bb12 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-bb0dd1df-5930-471c-a79b-b51d83e9431b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.349 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.349 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.351 2 INFO oslo.privsep.daemon [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpkfa2yl7o/privsep.sock']#033[00m
Oct  9 09:55:18 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v662: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 127 B/s wr, 10 op/s
Oct  9 09:55:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:18.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:18.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:18.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:18.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.966 2 INFO oslo.privsep.daemon [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.854 523 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.859 523 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.861 523 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Oct  9 09:55:18 compute-0 nova_compute[187439]: 2025-10-09 09:55:18.861 523 INFO oslo.privsep.daemon [-] privsep daemon running as pid 523#033[00m
Oct  9 09:55:19 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:18 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:19 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:18 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:19 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:18 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:19 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:55:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:19.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:55:19 compute-0 nova_compute[187439]: 2025-10-09 09:55:19.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:19 compute-0 nova_compute[187439]: 2025-10-09 09:55:19.251 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ebc58bd-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:55:19 compute-0 nova_compute[187439]: 2025-10-09 09:55:19.252 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5ebc58bd-13, col_values=(('external_ids', {'iface-id': '5ebc58bd-1327-457d-a25b-9c56c1001f06', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:ca:a2', 'vm-uuid': 'bb0dd1df-5930-471c-a79b-b51d83e9431b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:55:19 compute-0 nova_compute[187439]: 2025-10-09 09:55:19.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:19 compute-0 NetworkManager[982]: <info>  [1760003719.2549] manager: (tap5ebc58bd-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Oct  9 09:55:19 compute-0 nova_compute[187439]: 2025-10-09 09:55:19.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 09:55:19 compute-0 nova_compute[187439]: 2025-10-09 09:55:19.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:19 compute-0 nova_compute[187439]: 2025-10-09 09:55:19.262 2 INFO os_vif [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:ca:a2,bridge_name='br-int',has_traffic_filtering=True,id=5ebc58bd-1327-457d-a25b-9c56c1001f06,network=Network(55d0b606-ef1d-4562-907e-2ce1c8e82d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ebc58bd-13')#033[00m
Oct  9 09:55:19 compute-0 nova_compute[187439]: 2025-10-09 09:55:19.295 2 DEBUG nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 09:55:19 compute-0 nova_compute[187439]: 2025-10-09 09:55:19.295 2 DEBUG nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 09:55:19 compute-0 nova_compute[187439]: 2025-10-09 09:55:19.295 2 DEBUG nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:9e:ca:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  9 09:55:19 compute-0 nova_compute[187439]: 2025-10-09 09:55:19.296 2 INFO nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Using config drive#033[00m
Oct  9 09:55:19 compute-0 nova_compute[187439]: 2025-10-09 09:55:19.315 2 DEBUG nova.storage.rbd_utils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image bb0dd1df-5930-471c-a79b-b51d83e9431b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:55:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:55:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:55:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:55:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:55:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:19.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:55:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:55:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:55:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:55:19 compute-0 nova_compute[187439]: 2025-10-09 09:55:19.764 2 INFO nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Creating config drive at /var/lib/nova/instances/bb0dd1df-5930-471c-a79b-b51d83e9431b/disk.config#033[00m
Oct  9 09:55:19 compute-0 nova_compute[187439]: 2025-10-09 09:55:19.769 2 DEBUG oslo_concurrency.processutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb0dd1df-5930-471c-a79b-b51d83e9431b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3_kizza5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:55:19 compute-0 nova_compute[187439]: 2025-10-09 09:55:19.895 2 DEBUG oslo_concurrency.processutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb0dd1df-5930-471c-a79b-b51d83e9431b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3_kizza5" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:55:19 compute-0 nova_compute[187439]: 2025-10-09 09:55:19.923 2 DEBUG nova.storage.rbd_utils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image bb0dd1df-5930-471c-a79b-b51d83e9431b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:55:19 compute-0 nova_compute[187439]: 2025-10-09 09:55:19.926 2 DEBUG oslo_concurrency.processutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb0dd1df-5930-471c-a79b-b51d83e9431b/disk.config bb0dd1df-5930-471c-a79b-b51d83e9431b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:55:20 compute-0 nova_compute[187439]: 2025-10-09 09:55:20.044 2 DEBUG oslo_concurrency.processutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb0dd1df-5930-471c-a79b-b51d83e9431b/disk.config bb0dd1df-5930-471c-a79b-b51d83e9431b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:55:20 compute-0 nova_compute[187439]: 2025-10-09 09:55:20.045 2 INFO nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Deleting local config drive /var/lib/nova/instances/bb0dd1df-5930-471c-a79b-b51d83e9431b/disk.config because it was imported into RBD.#033[00m
Oct  9 09:55:20 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct  9 09:55:20 compute-0 systemd[1]: Started libvirt secret daemon.
Oct  9 09:55:20 compute-0 podman[192746]: 2025-10-09 09:55:20.152794245 +0000 UTC m=+0.061837393 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
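
The health_status=healthy event above comes from podman's healthcheck timer executing the check configured for the container ('test': '/openstack/healthcheck'). The same probe can be triggered by hand; a sketch using the container name from the log:

    import subprocess

    # Exit status 0 means the container reported healthy; a non-zero
    # status increments the health_failing_streak seen in these events.
    subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"],
                   check=False)
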
Oct  9 09:55:20 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct  9 09:55:20 compute-0 kernel: tap5ebc58bd-13: entered promiscuous mode
Oct  9 09:55:20 compute-0 NetworkManager[982]: <info>  [1760003720.1592] manager: (tap5ebc58bd-13): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Oct  9 09:55:20 compute-0 ovn_controller[83056]: 2025-10-09T09:55:20Z|00027|binding|INFO|Claiming lport 5ebc58bd-1327-457d-a25b-9c56c1001f06 for this chassis.
Oct  9 09:55:20 compute-0 ovn_controller[83056]: 2025-10-09T09:55:20Z|00028|binding|INFO|5ebc58bd-1327-457d-a25b-9c56c1001f06: Claiming fa:16:3e:9e:ca:a2 10.100.0.9
Oct  9 09:55:20 compute-0 nova_compute[187439]: 2025-10-09 09:55:20.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:20 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:20.170 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:ca:a2 10.100.0.9'], port_security=['fa:16:3e:9e:ca:a2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'bb0dd1df-5930-471c-a79b-b51d83e9431b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55d0b606-ef1d-4562-907e-2ce1c8e82d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5424a1d4-c7c5-4d79-af3c-b3e024a88ed4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f595f2ef-2be6-42da-a1e0-cbeb250a9fb9, chassis=[<ovs.db.idl.Row object at 0x7f406a6797f0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f406a6797f0>], logical_port=5ebc58bd-1327-457d-a25b-9c56c1001f06) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:55:20 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:20.171 92053 INFO neutron.agent.ovn.metadata.agent [-] Port 5ebc58bd-1327-457d-a25b-9c56c1001f06 in datapath 55d0b606-ef1d-4562-907e-2ce1c8e82d1a bound to our chassis#033[00m
Oct  9 09:55:20 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:20.174 92053 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55d0b606-ef1d-4562-907e-2ce1c8e82d1a#033[00m
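
The three ovn_metadata_agent lines above are the trigger path for metadata provisioning: a Port_Binding row update whose chassis column goes from empty to this chassis matches a PortBindingUpdatedEvent, and the handler provisions the port's datapath. A schematic of such an event class against the ovsdbapp module the log references; the agent wiring (self.agent, provision_datapath) is illustrative, not neutron's exact code:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fire when a logical port becomes bound to this chassis."""

        def __init__(self, agent):
            self.agent = agent
            # Matches the repr logged above: events=('update',),
            # table='Port_Binding', no static conditions.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # The log shows old=Port_Binding(chassis=[]): react only when
            # the chassis column transitions from empty to set.
            return not getattr(old, 'chassis', None) and bool(row.chassis)

        def run(self, event, row, old):
            self.agent.provision_datapath(row)  # assumed agent hook
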
Oct  9 09:55:20 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:20.175 92053 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpzqwh_al9/privsep.sock']#033[00m
Oct  9 09:55:20 compute-0 systemd-udevd[192798]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:55:20 compute-0 systemd-machined[143379]: New machine qemu-1-instance-00000001.
Oct  9 09:55:20 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Oct  9 09:55:20 compute-0 NetworkManager[982]: <info>  [1760003720.2347] device (tap5ebc58bd-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 09:55:20 compute-0 NetworkManager[982]: <info>  [1760003720.2353] device (tap5ebc58bd-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  9 09:55:20 compute-0 nova_compute[187439]: 2025-10-09 09:55:20.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:20 compute-0 ovn_controller[83056]: 2025-10-09T09:55:20Z|00029|binding|INFO|Setting lport 5ebc58bd-1327-457d-a25b-9c56c1001f06 ovn-installed in OVS
Oct  9 09:55:20 compute-0 ovn_controller[83056]: 2025-10-09T09:55:20Z|00030|binding|INFO|Setting lport 5ebc58bd-1327-457d-a25b-9c56c1001f06 up in Southbound
Oct  9 09:55:20 compute-0 nova_compute[187439]: 2025-10-09 09:55:20.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:20 compute-0 nova_compute[187439]: 2025-10-09 09:55:20.399 2 DEBUG nova.compute.manager [req-d3a053ac-d957-4a79-943d-d468ad9a9cc5 req-16cb62aa-fb61-4c6e-b05e-7feeb243c7d1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Received event network-vif-plugged-5ebc58bd-1327-457d-a25b-9c56c1001f06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:55:20 compute-0 nova_compute[187439]: 2025-10-09 09:55:20.400 2 DEBUG oslo_concurrency.lockutils [req-d3a053ac-d957-4a79-943d-d468ad9a9cc5 req-16cb62aa-fb61-4c6e-b05e-7feeb243c7d1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "bb0dd1df-5930-471c-a79b-b51d83e9431b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:20 compute-0 nova_compute[187439]: 2025-10-09 09:55:20.412 2 DEBUG oslo_concurrency.lockutils [req-d3a053ac-d957-4a79-943d-d468ad9a9cc5 req-16cb62aa-fb61-4c6e-b05e-7feeb243c7d1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "bb0dd1df-5930-471c-a79b-b51d83e9431b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:20 compute-0 nova_compute[187439]: 2025-10-09 09:55:20.412 2 DEBUG oslo_concurrency.lockutils [req-d3a053ac-d957-4a79-943d-d468ad9a9cc5 req-16cb62aa-fb61-4c6e-b05e-7feeb243c7d1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "bb0dd1df-5930-471c-a79b-b51d83e9431b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:55:20 compute-0 nova_compute[187439]: 2025-10-09 09:55:20.414 2 DEBUG nova.compute.manager [req-d3a053ac-d957-4a79-943d-d468ad9a9cc5 req-16cb62aa-fb61-4c6e-b05e-7feeb243c7d1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Processing event network-vif-plugged-5ebc58bd-1327-457d-a25b-9c56c1001f06 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  9 09:55:20 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v663: 337 pgs: 337 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 64 op/s
Oct  9 09:55:20 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:20.818 92053 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  9 09:55:20 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:20.819 92053 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpzqwh_al9/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  9 09:55:20 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:20.716 192856 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  9 09:55:20 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:20.720 192856 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  9 09:55:20 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:20.722 192856 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Oct  9 09:55:20 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:20.722 192856 INFO oslo.privsep.daemon [-] privsep daemon running as pid 192856#033[00m
Oct  9 09:55:20 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:20.823 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[2afd0f6f-610a-4bf7-9727-688130c318cd]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
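
The burst above is oslo.privsep bootstrapping: the agent shells out via sudo/neutron-rootwrap to start a helper, which binds a unix socket in a temp dir, drops to the capability set logged at 09:55:20.722, and then serves privileged calls for the agent. Declaring such a context looks roughly like this; the prefix and function are illustrative (the real context is neutron.privileged.default, as the helper command line shows):

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    default = priv_context.PrivContext(
        'example',                      # hypothetical prefix
        cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[caps.CAP_DAC_OVERRIDE, caps.CAP_DAC_READ_SEARCH,
                      caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN,
                      caps.CAP_SYS_PTRACE],
    )

    @default.entrypoint
    def read_routing_table():
        # The body runs inside the privsep daemon (uid/gid 0/0 with only
        # the capabilities above), not in the agent process; arguments
        # and return values cross the unix socket.
        ...
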
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.002 2 DEBUG nova.virt.driver [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Emitting event <LifecycleEvent: 1760003721.0009472, bb0dd1df-5930-471c-a79b-b51d83e9431b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.003 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] VM Started (Lifecycle Event)#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.005 2 DEBUG nova.compute.manager [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.009 2 DEBUG nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.012 2 INFO nova.virt.libvirt.driver [-] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Instance spawned successfully.#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.012 2 DEBUG nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.035 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.038 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.054 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.054 2 DEBUG nova.virt.driver [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Emitting event <LifecycleEvent: 1760003721.0010993, bb0dd1df-5930-471c-a79b-b51d83e9431b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.055 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] VM Paused (Lifecycle Event)#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.064 2 DEBUG nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.064 2 DEBUG nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.065 2 DEBUG nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.065 2 DEBUG nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.065 2 DEBUG nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.066 2 DEBUG nova.virt.libvirt.driver [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.068 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.073 2 DEBUG nova.virt.driver [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Emitting event <LifecycleEvent: 1760003721.0086815, bb0dd1df-5930-471c-a79b-b51d83e9431b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.074 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] VM Resumed (Lifecycle Event)#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.088 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.090 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.106 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.113 2 INFO nova.compute.manager [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Took 8.88 seconds to spawn the instance on the hypervisor.#033[00m
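
The Started/Paused/Resumed trio above is normal for a spawn: libvirt creates the guest paused and resumes it once setup finishes, and nova re-emits each libvirt event as a LifecycleEvent. The power-state sync declines to act on them because the instance still has a pending task; the guard amounts to this simplified sketch (names assumed, not nova's exact code):

    import logging

    LOG = logging.getLogger(__name__)

    def handle_lifecycle_event(instance, vm_power_state):
        # Never fight an in-flight task such as 'spawning': the task
        # itself will persist the final power state when it completes.
        if instance.task_state is not None:
            LOG.info("During sync_power_state the instance has a "
                     "pending task (%s). Skip.", instance.task_state)
            return
        sync_power_state(instance, vm_power_state)  # assumed helper
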
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.114 2 DEBUG nova.compute.manager [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:55:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:21.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.165 2 INFO nova.compute.manager [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Took 9.62 seconds to build instance.#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.180 2 DEBUG oslo_concurrency.lockutils [None req-6e1f0ca7-dfb4-4f69-94b7-53d95228521c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "bb0dd1df-5930-471c-a79b-b51d83e9431b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:55:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct  9 09:55:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct  9 09:55:21 compute-0 ceph-mon[4497]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct  9 09:55:21 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:21.439 192856 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:21 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:21.441 192856 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:21 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:21.444 192856 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:55:21 compute-0 nova_compute[187439]: 2025-10-09 09:55:21.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:21 compute-0 podman[192861]: 2025-10-09 09:55:21.61349862 +0000 UTC m=+0.052530463 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  9 09:55:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:21.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:22 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:22.143 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[8c5b8166-1668-45cc-95c2-d06ea8998c3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:22 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:22.144 92053 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap55d0b606-e1 in ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  9 09:55:22 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:22.147 192856 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap55d0b606-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  9 09:55:22 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:22.147 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[23d28d7f-f5af-44cb-ae4b-b9e41cc324f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:22 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:22.151 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[73c58eb6-ff81-45bf-8f1d-5b34faaf6c7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:22 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:22.178 92357 DEBUG oslo.privsep.daemon [-] privsep: reply[f85ce9de-1121-4ff9-84f4-48cc4c126016]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:22 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:22.208 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[58e8e4c1-703a-4bc0-966c-de369f7e26a8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:22 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:22.210 92053 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpj9blltd5/privsep.sock']#033[00m
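
Provisioning now builds the namespace plumbing: a veth pair whose tap55d0b606-e1 end sits inside the ovnmeta- namespace, with the earlier "Interface tap55d0b606-e0 not found" probe just confirming there is no stale device to clean up. The plain iproute2 equivalent of that setup, sketched below (neutron performs it through its privileged ip_lib, not by shelling out like this):

    import subprocess

    ns = "ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a"

    def run(*args):
        subprocess.run(args, check=True)

    run("ip", "netns", "add", ns)
    run("ip", "link", "add", "tap55d0b606-e0", "type", "veth",
        "peer", "name", "tap55d0b606-e1")
    # Move the -e1 end into the namespace, then bring both ends up.
    run("ip", "link", "set", "tap55d0b606-e1", "netns", ns)
    run("ip", "netns", "exec", ns, "ip", "link", "set",
        "tap55d0b606-e1", "up")
    run("ip", "link", "set", "tap55d0b606-e0", "up")
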
Oct  9 09:55:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:55:22] "GET /metrics HTTP/1.1" 200 48499 "" "Prometheus/2.51.0"
Oct  9 09:55:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:55:22] "GET /metrics HTTP/1.1" 200 48499 "" "Prometheus/2.51.0"
Oct  9 09:55:22 compute-0 nova_compute[187439]: 2025-10-09 09:55:22.480 2 DEBUG nova.compute.manager [req-2b3b552c-b097-4d32-a8ea-5be926df9f84 req-2e004b52-1609-47b3-8485-fcd853ab239c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Received event network-vif-plugged-5ebc58bd-1327-457d-a25b-9c56c1001f06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:55:22 compute-0 nova_compute[187439]: 2025-10-09 09:55:22.481 2 DEBUG oslo_concurrency.lockutils [req-2b3b552c-b097-4d32-a8ea-5be926df9f84 req-2e004b52-1609-47b3-8485-fcd853ab239c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "bb0dd1df-5930-471c-a79b-b51d83e9431b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:22 compute-0 nova_compute[187439]: 2025-10-09 09:55:22.481 2 DEBUG oslo_concurrency.lockutils [req-2b3b552c-b097-4d32-a8ea-5be926df9f84 req-2e004b52-1609-47b3-8485-fcd853ab239c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "bb0dd1df-5930-471c-a79b-b51d83e9431b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:22 compute-0 nova_compute[187439]: 2025-10-09 09:55:22.481 2 DEBUG oslo_concurrency.lockutils [req-2b3b552c-b097-4d32-a8ea-5be926df9f84 req-2e004b52-1609-47b3-8485-fcd853ab239c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "bb0dd1df-5930-471c-a79b-b51d83e9431b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:55:22 compute-0 nova_compute[187439]: 2025-10-09 09:55:22.482 2 DEBUG nova.compute.manager [req-2b3b552c-b097-4d32-a8ea-5be926df9f84 req-2e004b52-1609-47b3-8485-fcd853ab239c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] No waiting events found dispatching network-vif-plugged-5ebc58bd-1327-457d-a25b-9c56c1001f06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 09:55:22 compute-0 nova_compute[187439]: 2025-10-09 09:55:22.482 2 WARNING nova.compute.manager [req-2b3b552c-b097-4d32-a8ea-5be926df9f84 req-2e004b52-1609-47b3-8485-fcd853ab239c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Received unexpected event network-vif-plugged-5ebc58bd-1327-457d-a25b-9c56c1001f06 for instance with vm_state active and task_state None.#033[00m
Oct  9 09:55:22 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v665: 337 pgs: 337 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 65 op/s
Oct  9 09:55:22 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:22.827 92053 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  9 09:55:22 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:22.827 92053 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpj9blltd5/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  9 09:55:22 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:22.721 192891 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  9 09:55:22 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:22.727 192891 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  9 09:55:22 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:22.729 192891 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct  9 09:55:22 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:22.732 192891 INFO oslo.privsep.daemon [-] privsep daemon running as pid 192891#033[00m
Oct  9 09:55:22 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:22.830 192891 DEBUG oslo.privsep.daemon [-] privsep: reply[3e03b7b7-0a9a-4940-87d4-fd6d321a2a31]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:23.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:23.314 192891 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:23.315 192891 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:23.315 192891 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:55:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:55:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:23.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:55:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:23.863 192891 DEBUG oslo.privsep.daemon [-] privsep: reply[12df44e1-47ca-479a-8cc9-6e6be321698a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:23 compute-0 NetworkManager[982]: <info>  [1760003723.8740] manager: (tap55d0b606-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Oct  9 09:55:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:23.876 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[dc1a577c-681d-43ce-a828-9565711d2187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:23 compute-0 systemd-udevd[192901]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:55:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:23.896 192891 DEBUG oslo.privsep.daemon [-] privsep: reply[fba6597a-bd8d-4baa-aedd-3bbe4f5c4420]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:23.899 192891 DEBUG oslo.privsep.daemon [-] privsep: reply[df2ca6a4-f9b8-4ef5-a9a3-31aff3da6e15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:23 compute-0 NetworkManager[982]: <info>  [1760003723.9211] device (tap55d0b606-e0): carrier: link connected
Oct  9 09:55:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:23.925 192891 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a364b3-9b80-470d-aa91-48e52f16560f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:23.943 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[a15c6ff7-bd0e-4cd2-b133-7face91d4ec0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55d0b606-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:85:a5:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 140845, 'reachable_time': 35681, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 192912, 'error': None, 'target': 'ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:23.963 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[5ad9e65b-7caa-4537-9e83-bf70fb303737]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe85:a5c0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 140845, 'tstamp': 140845}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 192914, 'error': None, 'target': 'ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:23.986 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[f05edeaf-9b10-47de-8975-6432f95a1fb0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55d0b606-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:85:a5:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 140845, 'reachable_time': 35681, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 192915, 'error': None, 'target': 'ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:24 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:23 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:24 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:23 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:24 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:23 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:24 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:24 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:24.030 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[ec400d5b-89c3-4f7a-855a-526062e62341]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:24.098 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[b4dd53cd-2c87-4f4d-af1f-2c3f570fc714]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:24.100 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55d0b606-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:24.100 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:24.101 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55d0b606-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:55:24 compute-0 nova_compute[187439]: 2025-10-09 09:55:24.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:24 compute-0 NetworkManager[982]: <info>  [1760003724.1041] manager: (tap55d0b606-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct  9 09:55:24 compute-0 kernel: tap55d0b606-e0: entered promiscuous mode
Oct  9 09:55:24 compute-0 nova_compute[187439]: 2025-10-09 09:55:24.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:24.107 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55d0b606-e0, col_values=(('external_ids', {'iface-id': '4ce3dd88-4506-4d4b-8422-e06959275853'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
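
The three ovsdbapp transactions above wire the host end of the veth into OVN: drop any stale port from br-ex, add it to br-int, and set external_ids:iface-id so ovn-controller can match the interface to logical port 4ce3dd88-4506-4d4b-8422-e06959275853. For reference, their ovs-vsctl equivalents:

    import subprocess

    port = "tap55d0b606-e0"
    iface_id = "4ce3dd88-4506-4d4b-8422-e06959275853"

    for cmd in (
        # DelPortCommand(port=..., bridge=br-ex, if_exists=True)
        ["ovs-vsctl", "--if-exists", "del-port", "br-ex", port],
        # AddPortCommand(bridge=br-int, port=..., may_exist=True)
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port],
        # DbSetCommand(table=Interface, record=..., external_ids)
        ["ovs-vsctl", "set", "Interface", port,
         f"external_ids:iface-id={iface_id}"],
    ):
        subprocess.run(cmd, check=True)
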
Oct  9 09:55:24 compute-0 nova_compute[187439]: 2025-10-09 09:55:24.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:24 compute-0 ovn_controller[83056]: 2025-10-09T09:55:24Z|00031|binding|INFO|Releasing lport 4ce3dd88-4506-4d4b-8422-e06959275853 from this chassis (sb_readonly=0)
Oct  9 09:55:24 compute-0 ovn_controller[83056]: 2025-10-09T09:55:24Z|00032|binding|INFO|Releasing lport 4ce3dd88-4506-4d4b-8422-e06959275853 from this chassis (sb_readonly=0)
Oct  9 09:55:24 compute-0 nova_compute[187439]: 2025-10-09 09:55:24.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:24 compute-0 NetworkManager[982]: <info>  [1760003724.1145] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/29)
Oct  9 09:55:24 compute-0 NetworkManager[982]: <info>  [1760003724.1148] device (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:55:24 compute-0 NetworkManager[982]: <info>  [1760003724.1158] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/30)
Oct  9 09:55:24 compute-0 NetworkManager[982]: <info>  [1760003724.1162] device (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:55:24 compute-0 NetworkManager[982]: <info>  [1760003724.1172] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Oct  9 09:55:24 compute-0 NetworkManager[982]: <info>  [1760003724.1178] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Oct  9 09:55:24 compute-0 NetworkManager[982]: <info>  [1760003724.1182] device (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:55:24 compute-0 NetworkManager[982]: <info>  [1760003724.1186] device (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:55:24 compute-0 nova_compute[187439]: 2025-10-09 09:55:24.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:24 compute-0 nova_compute[187439]: 2025-10-09 09:55:24.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:24.177 92053 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/55d0b606-ef1d-4562-907e-2ce1c8e82d1a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/55d0b606-ef1d-4562-907e-2ce1c8e82d1a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  9 09:55:24 compute-0 ovn_controller[83056]: 2025-10-09T09:55:24Z|00033|binding|INFO|Releasing lport 4ce3dd88-4506-4d4b-8422-e06959275853 from this chassis (sb_readonly=0)
Oct  9 09:55:24 compute-0 nova_compute[187439]: 2025-10-09 09:55:24.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:24.181 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[478260e5-371b-4a9e-8c98-074bb75fd23f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:24.185 92053 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]: global
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]:    log         /dev/log local0 debug
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]:    log-tag     haproxy-metadata-proxy-55d0b606-ef1d-4562-907e-2ce1c8e82d1a
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]:    user        root
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]:    group       root
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]:    maxconn     1024
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]:    pidfile     /var/lib/neutron/external/pids/55d0b606-ef1d-4562-907e-2ce1c8e82d1a.pid.haproxy
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]:    daemon
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]: 
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]: defaults
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]:    log global
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]:    mode http
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]:    option httplog
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]:    option dontlognull
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]:    option http-server-close
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]:    option forwardfor
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]:    retries                 3
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]:    timeout http-request    30s
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]:    timeout connect         30s
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]:    timeout client          32s
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]:    timeout server          32s
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]:    timeout http-keep-alive 30s
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]: 
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]: 
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]: listen listener
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]:    bind 169.254.169.254:80
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]:    server metadata /var/lib/neutron/metadata_proxy
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]:    http-request add-header X-OVN-Network-ID 55d0b606-ef1d-4562-907e-2ce1c8e82d1a
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  9 09:55:24 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:24.188 92053 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a', 'env', 'PROCESS_TAG=haproxy-55d0b606-ef1d-4562-907e-2ce1c8e82d1a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/55d0b606-ef1d-4562-907e-2ce1c8e82d1a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
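
The lines above are the OVN metadata agent (re)provisioning the metadata proxy for network 55d0b606-ef1d-4562-907e-2ce1c8e82d1a: the pidfile probe at 09:55:24.177 fails with ENOENT (no proxy is running yet), the agent renders the haproxy configuration just dumped (bind 169.254.169.254:80 inside the ovnmeta- namespace, forward to the /var/lib/neutron/metadata_proxy unix socket, stamp every request with X-OVN-Network-ID), and finally launches haproxy through neutron-rootwrap inside that namespace. A minimal sketch of the same decision flow, using subprocess directly instead of neutron's rootwrap plumbing (paths and helper names copied from the log; the helper itself is hypothetical):

    import os
    import subprocess

    NETWORK_ID = "55d0b606-ef1d-4562-907e-2ce1c8e82d1a"   # from the log above
    PIDFILE = f"/var/lib/neutron/external/pids/{NETWORK_ID}.pid.haproxy"
    CONF = f"/var/lib/neutron/ovn-metadata-proxy/{NETWORK_ID}.conf"

    def proxy_running() -> bool:
        # Mirrors get_value_from_file(): ENOENT just means "not started yet".
        try:
            with open(PIDFILE) as f:
                pid = int(f.read().strip())
        except FileNotFoundError:
            return False
        return os.path.exists(f"/proc/{pid}")

    if not proxy_running():
        # The agent writes the rendered haproxy.cfg first, then execs haproxy
        # inside the ovnmeta-<network> namespace (via rootwrap in the real agent).
        subprocess.run(
            ["ip", "netns", "exec", f"ovnmeta-{NETWORK_ID}",
             "haproxy", "-f", CONF],
            check=True,
        )
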
Oct  9 09:55:24 compute-0 nova_compute[187439]: 2025-10-09 09:55:24.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:24 compute-0 nova_compute[187439]: 2025-10-09 09:55:24.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:24 compute-0 podman[192947]: 2025-10-09 09:55:24.527269395 +0000 UTC m=+0.042748807 container create 0660fb64bad5cda17426e2fe2b720616850ea2c15728583f82e1debec3d61307 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  9 09:55:24 compute-0 systemd[1]: Started libpod-conmon-0660fb64bad5cda17426e2fe2b720616850ea2c15728583f82e1debec3d61307.scope.
Oct  9 09:55:24 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:55:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50981af4961ee6ea7a1afb3ca28e1fc8981c6f852fc8bed01f8855d2bd63f821/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  9 09:55:24 compute-0 podman[192947]: 2025-10-09 09:55:24.601075008 +0000 UTC m=+0.116554439 container init 0660fb64bad5cda17426e2fe2b720616850ea2c15728583f82e1debec3d61307 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  9 09:55:24 compute-0 podman[192947]: 2025-10-09 09:55:24.50860612 +0000 UTC m=+0.024085552 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 09:55:24 compute-0 podman[192947]: 2025-10-09 09:55:24.608851903 +0000 UTC m=+0.124331315 container start 0660fb64bad5cda17426e2fe2b720616850ea2c15728583f82e1debec3d61307 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  9 09:55:24 compute-0 neutron-haproxy-ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a[192960]: [NOTICE]   (192964) : New worker (192966) forked
Oct  9 09:55:24 compute-0 neutron-haproxy-ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a[192960]: [NOTICE]   (192964) : Loading success.
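
podman reports the proxy container's lifecycle in three events, create (09:55:24.527), init (.601) and start (.608), after which haproxy's master process logs the worker fork and "Loading success." The container name embeds the network UUID, so each datapath gets its own neutron-haproxy-ovnmeta-<uuid> instance. A quick liveness check for that container, sketched with subprocess (the name is taken from the log; podman must be on PATH and the caller needs access to its storage):

    import subprocess

    name = "neutron-haproxy-ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a"
    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Status}}", name],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(name, "->", status)   # expected while the proxy is up: "running"
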
Oct  9 09:55:24 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v666: 337 pgs: 337 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.4 MiB/s wr, 47 op/s
Oct  9 09:55:24 compute-0 nova_compute[187439]: 2025-10-09 09:55:24.804 2 DEBUG nova.compute.manager [req-04d4b084-baeb-4ef0-ae14-0bf23218aca2 req-8e12c23a-4ef9-4dec-9a35-c19b71b1f4f5 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Received event network-changed-5ebc58bd-1327-457d-a25b-9c56c1001f06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:55:24 compute-0 nova_compute[187439]: 2025-10-09 09:55:24.804 2 DEBUG nova.compute.manager [req-04d4b084-baeb-4ef0-ae14-0bf23218aca2 req-8e12c23a-4ef9-4dec-9a35-c19b71b1f4f5 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Refreshing instance network info cache due to event network-changed-5ebc58bd-1327-457d-a25b-9c56c1001f06. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 09:55:24 compute-0 nova_compute[187439]: 2025-10-09 09:55:24.805 2 DEBUG oslo_concurrency.lockutils [req-04d4b084-baeb-4ef0-ae14-0bf23218aca2 req-8e12c23a-4ef9-4dec-9a35-c19b71b1f4f5 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-bb0dd1df-5930-471c-a79b-b51d83e9431b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:55:24 compute-0 nova_compute[187439]: 2025-10-09 09:55:24.805 2 DEBUG oslo_concurrency.lockutils [req-04d4b084-baeb-4ef0-ae14-0bf23218aca2 req-8e12c23a-4ef9-4dec-9a35-c19b71b1f4f5 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-bb0dd1df-5930-471c-a79b-b51d83e9431b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:55:24 compute-0 nova_compute[187439]: 2025-10-09 09:55:24.805 2 DEBUG nova.network.neutron [req-04d4b084-baeb-4ef0-ae14-0bf23218aca2 req-8e12c23a-4ef9-4dec-9a35-c19b71b1f4f5 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Refreshing network info cache for port 5ebc58bd-1327-457d-a25b-9c56c1001f06 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 09:55:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:25.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:25.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
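
The recurring radosgw "beast" lines are anonymous HEAD / probes arriving on a fixed two-second cadence from 192.168.122.102 and 192.168.122.100 and answered with 200 in roughly 0-2 ms. That pattern (bare HEAD, no auth, clockwork interval, two frontend sources) reads like an external load-balancer health check rather than client traffic, though that is an inference from the pattern, not something the log states. The same probe, sketched with the standard library (host and port are assumptions; the log only shows the probe sources, not the RGW listen address):

    import http.client

    # Assumed endpoint: adjust to the actual radosgw bind address/port.
    conn = http.client.HTTPConnection("compute-0", 8080, timeout=2)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)   # a healthy gateway answers 200 with an empty body
    conn.close()
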
Oct  9 09:55:26 compute-0 nova_compute[187439]: 2025-10-09 09:55:26.009 2 DEBUG nova.network.neutron [req-04d4b084-baeb-4ef0-ae14-0bf23218aca2 req-8e12c23a-4ef9-4dec-9a35-c19b71b1f4f5 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Updated VIF entry in instance network info cache for port 5ebc58bd-1327-457d-a25b-9c56c1001f06. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 09:55:26 compute-0 nova_compute[187439]: 2025-10-09 09:55:26.010 2 DEBUG nova.network.neutron [req-04d4b084-baeb-4ef0-ae14-0bf23218aca2 req-8e12c23a-4ef9-4dec-9a35-c19b71b1f4f5 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Updating instance_info_cache with network_info: [{"id": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "address": "fa:16:3e:9e:ca:a2", "network": {"id": "55d0b606-ef1d-4562-907e-2ce1c8e82d1a", "bridge": "br-int", "label": "tempest-network-smoke--846843571", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ebc58bd-13", "ovs_interfaceid": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:55:26 compute-0 nova_compute[187439]: 2025-10-09 09:55:26.028 2 DEBUG oslo_concurrency.lockutils [req-04d4b084-baeb-4ef0-ae14-0bf23218aca2 req-8e12c23a-4ef9-4dec-9a35-c19b71b1f4f5 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-bb0dd1df-5930-471c-a79b-b51d83e9431b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
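
This is nova's external-event path end to end: neutron emits network-changed for port 5ebc58bd-1327-457d-a25b-9c56c1001f06, and the compute manager refreshes that instance's network info cache under the refresh_cache-<instance> lock (acquired at 09:55:24.805, released at 09:55:26.028) before writing the updated VIF entry, including the floating IP 192.168.122.185, back to the cache. A minimal sketch of the same serialize-then-refresh pattern with oslo.concurrency, where refresh_cb stands in for the real cache-refresh call:

    from oslo_concurrency import lockutils

    INSTANCE = "bb0dd1df-5930-471c-a79b-b51d83e9431b"

    def handle_network_changed(port_id, refresh_cb):
        # One writer at a time per instance, matching the log's
        # "Acquiring/Acquired/Releasing lock refresh_cache-<uuid>" triplet.
        with lockutils.lock(f"refresh_cache-{INSTANCE}"):
            refresh_cb(port_id)

    handle_network_changed(
        "5ebc58bd-1327-457d-a25b-9c56c1001f06",
        lambda port: print("refreshed network info for", port),
    )
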
Oct  9 09:55:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:26 compute-0 nova_compute[187439]: 2025-10-09 09:55:26.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:26 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v667: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Oct  9 09:55:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:27.045Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:27.053Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:27.053Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:27.054Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
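
The Alertmanager failures here are purely DNS: every ceph-dashboard webhook target (np0005478302/3/4.shiftstack:8443) fails with "no such host" against the resolver at 192.168.122.80:53, so each notification is retried and eventually canceled after 7-8 attempts, and the cycle repeats for as long as the names stay unresolvable. A quick resolver check from the same host, sketched with the standard library:

    import socket

    for host in ("np0005478302.shiftstack",
                 "np0005478303.shiftstack",
                 "np0005478304.shiftstack"):
        try:
            addrs = {ai[4][0] for ai in socket.getaddrinfo(host, 8443)}
            print(host, "->", sorted(addrs))
        except socket.gaierror as exc:
            # Matches the "dial tcp: lookup ...: no such host" errors above.
            print(host, "-> lookup failed:", exc)
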
Oct  9 09:55:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:27.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:27.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:28 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v668: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Oct  9 09:55:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:28.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:28.885Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:28.885Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:28.885Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:29 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:28 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:29 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:28 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:29 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:28 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:29 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
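
ganesha repeats this four-line grace sequence every few seconds: it (re)enters a 90-second grace period, reloads client reclaim state from the RADOS backend, then checks whether grace can be lifted. "reclaim complete(0) clid count(0)" says no clients hold reclaimable state; the enforcement check then logs ret=-45 and the server stays in grace for now (the log does not say what -45 decodes to, so that is left as-is). Pulling those two counters out of the line, as a small parsing sketch:

    import re

    line = ("nfs_try_lift_grace :STATE :EVENT "
            ":check grace:reclaim complete(0) clid count(0)")
    m = re.search(r"reclaim complete\((\d+)\) clid count\((\d+)\)", line)
    reclaim_complete, clid_count = map(int, m.groups())
    # With no clients holding reclaimable state, only the grace window
    # itself keeps the server IN GRACE.
    print(reclaim_complete, clid_count)
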
Oct  9 09:55:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.002000021s ======
Oct  9 09:55:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:29.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000021s
Oct  9 09:55:29 compute-0 nova_compute[187439]: 2025-10-09 09:55:29.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:29.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:30 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v669: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 80 op/s
Oct  9 09:55:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:31.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:31 compute-0 nova_compute[187439]: 2025-10-09 09:55:31.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:31.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:55:32] "GET /metrics HTTP/1.1" 200 48499 "" "Prometheus/2.51.0"
Oct  9 09:55:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:55:32] "GET /metrics HTTP/1.1" 200 48499 "" "Prometheus/2.51.0"
Oct  9 09:55:32 compute-0 podman[193006]: 2025-10-09 09:55:32.667258993 +0000 UTC m=+0.104072821 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:55:32 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v670: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 69 op/s
Oct  9 09:55:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:33.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:33.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:33 compute-0 ovn_controller[83056]: 2025-10-09T09:55:33Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9e:ca:a2 10.100.0.9
Oct  9 09:55:33 compute-0 ovn_controller[83056]: 2025-10-09T09:55:33Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9e:ca:a2 10.100.0.9
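
OVN's built-in DHCP (the pinctrl thread in ovn-controller) answers the guest directly, DHCPOFFER then DHCPACK for fa:16:3e:9e:ca:a2 at 10.100.0.9, which matches the MAC and fixed IP in the instance network-info cache written at 09:55:26 above; no dnsmasq process is involved. A tiny consistency check over that cached entry (values copied from the log, dict structure simplified for the sketch):

    cached_vif = {
        "address": "fa:16:3e:9e:ca:a2",
        "fixed_ips": ["10.100.0.9"],   # from the instance_info_cache update
    }
    offer = {"mac": "fa:16:3e:9e:ca:a2", "ip": "10.100.0.9"}  # DHCPOFFER/ACK

    assert offer["mac"] == cached_vif["address"]
    assert offer["ip"] in cached_vif["fixed_ips"]
    print("OVN DHCP answer matches the Neutron port binding")
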
Oct  9 09:55:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:34 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:34 compute-0 nova_compute[187439]: 2025-10-09 09:55:34.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:55:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:55:34 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v671: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Oct  9 09:55:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:55:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:35.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:55:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:35.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:36 compute-0 nova_compute[187439]: 2025-10-09 09:55:36.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:36 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v672: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Oct  9 09:55:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:37.045Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:37.052Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:37.052Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:37.053Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:37.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:37.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:38 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v673: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  9 09:55:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:38.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:38.885Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:38.885Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:38.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:39 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:38 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:39 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:38 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:39 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:38 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:39 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:39 compute-0 nova_compute[187439]: 2025-10-09 09:55:39.138 2 INFO nova.compute.manager [None req-28b2d2b3-f15a-41db-a9b8-4523fef43e24 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Get console output#033[00m
Oct  9 09:55:39 compute-0 nova_compute[187439]: 2025-10-09 09:55:39.144 2 INFO oslo.privsep.daemon [None req-28b2d2b3-f15a-41db-a9b8-4523fef43e24 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpc4fwrjvk/privsep.sock']#033[00m
Oct  9 09:55:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:39.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:39 compute-0 nova_compute[187439]: 2025-10-09 09:55:39.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:39.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:39 compute-0 nova_compute[187439]: 2025-10-09 09:55:39.719 2 INFO oslo.privsep.daemon [None req-28b2d2b3-f15a-41db-a9b8-4523fef43e24 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct  9 09:55:39 compute-0 nova_compute[187439]: 2025-10-09 09:55:39.627 589 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  9 09:55:39 compute-0 nova_compute[187439]: 2025-10-09 09:55:39.631 589 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  9 09:55:39 compute-0 nova_compute[187439]: 2025-10-09 09:55:39.633 589 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct  9 09:55:39 compute-0 nova_compute[187439]: 2025-10-09 09:55:39.634 589 INFO oslo.privsep.daemon [-] privsep daemon running as pid 589#033[00m
Oct  9 09:55:39 compute-0 nova_compute[187439]: 2025-10-09 09:55:39.797 589 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
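
To serve the "Get console output" request, nova spawns a privsep helper through sudo/nova-rootwrap; the daemon starts as root (uid/gid 0/0) with the bounded capability set logged above, then reports a benign "Ignored error while reading from instance console pty: can't concat NoneType to bytes". That message is the exact TypeError Python raises when a console read yields None and the reader concatenates it onto a bytes buffer, which nova catches and ignores here. Reproducing it:

    buf = b""
    chunk = None          # what the pty read produced here: no data
    try:
        buf += chunk      # the concatenation the console reader performs
    except TypeError as exc:
        print(exc)        # -> can't concat NoneType to bytes
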
Oct  9 09:55:40 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v674: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  9 09:55:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:41.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:41 compute-0 nova_compute[187439]: 2025-10-09 09:55:41.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:41.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:55:42] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Oct  9 09:55:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:55:42] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Oct  9 09:55:42 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v675: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  9 09:55:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:43.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:43 compute-0 podman[193045]: 2025-10-09 09:55:43.609890899 +0000 UTC m=+0.046575042 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  9 09:55:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:43.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:44 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:44 compute-0 nova_compute[187439]: 2025-10-09 09:55:44.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:44 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v676: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  9 09:55:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:45.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:45.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:46 compute-0 nova_compute[187439]: 2025-10-09 09:55:46.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:46 compute-0 nova_compute[187439]: 2025-10-09 09:55:46.246 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  9 09:55:46 compute-0 nova_compute[187439]: 2025-10-09 09:55:46.258 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  9 09:55:46 compute-0 nova_compute[187439]: 2025-10-09 09:55:46.259 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:46 compute-0 nova_compute[187439]: 2025-10-09 09:55:46.259 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  9 09:55:46 compute-0 nova_compute[187439]: 2025-10-09 09:55:46.265 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:46 compute-0 nova_compute[187439]: 2025-10-09 09:55:46.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:46 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v677: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 276 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  9 09:55:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:47.046Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:47.053Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:47.054Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:47.054Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:47.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:47.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:48 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v678: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct  9 09:55:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:48.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:48.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:48.887Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:48.887Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:49.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:49 compute-0 nova_compute[187439]: 2025-10-09 09:55:49.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:55:49
Oct  9 09:55:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:55:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 09:55:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['.rgw.root', '.nfs', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', '.mgr', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'default.rgw.meta', 'volumes']
Oct  9 09:55:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
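
The mgr balancer wakes, builds plan auto_2025-10-09_09:55:49 in upmap mode with a 5% max-misplaced budget, walks the listed pools, and prepares 0 of a possible 10 upmap changes, meaning PG placement is already optimal within its threshold. The gating arithmetic, as a rough sketch (the threshold is from the log; the function shape is illustrative, not the mgr module's actual code):

    MAX_MISPLACED = 0.05     # "Mode upmap, max misplaced 0.050000"

    def may_optimize(misplaced_pgs: int, total_pgs: int) -> bool:
        # The balancer only proposes new upmaps while the fraction of
        # misplaced PGs stays under its budget.
        return misplaced_pgs / total_pgs < MAX_MISPLACED

    print(may_optimize(0, 337))    # True  -> plan runs, finds nothing to move
    print(may_optimize(30, 337))   # False -> ~8.9% misplaced, balancer waits
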
Oct  9 09:55:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:55:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:55:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:55:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:55:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:55:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:55:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:55:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:55:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:55:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:55:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:55:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:49.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:55:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:55:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:55:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:55:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:55:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:55:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:55:50 compute-0 nova_compute[187439]: 2025-10-09 09:55:50.271 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:50 compute-0 nova_compute[187439]: 2025-10-09 09:55:50.272 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:50 compute-0 nova_compute[187439]: 2025-10-09 09:55:50.298 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:50 compute-0 nova_compute[187439]: 2025-10-09 09:55:50.298 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:50 compute-0 nova_compute[187439]: 2025-10-09 09:55:50.299 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
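[editor's note] The acquire/release pair above is oslo.concurrency's named-lock wrapper around the resource tracker; the "waited"/"held" figures it logs are the two timings around the critical section. A rough sketch of the pattern (not nova's actual code) under the assumption of a single in-process lock per name:

    import threading
    import time

    _locks: dict[str, threading.Lock] = {}

    def synchronized(name: str):
        """Mimics oslo_concurrency.lockutils.synchronized: callers serialize
        on one named lock, and the wait/hold durations are what appear as
        'waited N s' / 'held N s' in the log lines above."""
        lock = _locks.setdefault(name, threading.Lock())
        def wrap(fn):
            def inner(*args, **kwargs):
                t0 = time.monotonic()
                with lock:
                    waited = time.monotonic() - t0
                    t1 = time.monotonic()
                    try:
                        return fn(*args, **kwargs)
                    finally:
                        print(f'Lock "{name}" held {time.monotonic() - t1:.3f}s '
                              f'(waited {waited:.3f}s)')
            return inner
        return wrap

    @synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # body elided; the log only records the lock bookkeeping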
Oct  9 09:55:50 compute-0 nova_compute[187439]: 2025-10-09 09:55:50.299 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:55:50 compute-0 nova_compute[187439]: 2025-10-09 09:55:50.299 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:55:50 compute-0 podman[193090]: 2025-10-09 09:55:50.611048265 +0000 UTC m=+0.049808418 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 09:55:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:55:50 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3508635897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:55:50 compute-0 nova_compute[187439]: 2025-10-09 09:55:50.654 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
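[editor's note] update_available_resource sizes the RBD backend by shelling out to the ceph CLI, exactly as the processutils lines show (dispatched to the mon as {"prefix": "df", "format": "json"} and back in 0.354s). A minimal reproduction of that probe outside nova; the command line is verbatim from the log, while the pool name "vms" and the arithmetic on the returned JSON are assumptions:

    import json
    import subprocess

    def ceph_pool_capacity_gb(pool: str = "vms") -> tuple[float, float]:
        """Run the same 'ceph df' probe nova logs above and return
        (total_gb, free_gb) for one pool from its JSON stats."""
        out = subprocess.check_output(
            ["ceph", "df", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        )
        stats = json.loads(out)
        for p in stats["pools"]:
            if p["name"] == pool:
                used = p["stats"]["bytes_used"]
                avail = p["stats"]["max_avail"]
                return (used + avail) / 2**30, avail / 2**30
        raise LookupError(pool)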
Oct  9 09:55:50 compute-0 nova_compute[187439]: 2025-10-09 09:55:50.709 2 DEBUG nova.virt.libvirt.driver [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  9 09:55:50 compute-0 nova_compute[187439]: 2025-10-09 09:55:50.710 2 DEBUG nova.virt.libvirt.driver [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  9 09:55:50 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v679: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 16 KiB/s wr, 9 op/s
Oct  9 09:55:50 compute-0 nova_compute[187439]: 2025-10-09 09:55:50.967 2 WARNING nova.virt.libvirt.driver [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:55:50 compute-0 nova_compute[187439]: 2025-10-09 09:55:50.970 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4569MB free_disk=59.94271469116211GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:55:50 compute-0 nova_compute[187439]: 2025-10-09 09:55:50.970 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:50 compute-0 nova_compute[187439]: 2025-10-09 09:55:50.970 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:51 compute-0 nova_compute[187439]: 2025-10-09 09:55:51.053 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Instance bb0dd1df-5930-471c-a79b-b51d83e9431b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  9 09:55:51 compute-0 nova_compute[187439]: 2025-10-09 09:55:51.053 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:55:51 compute-0 nova_compute[187439]: 2025-10-09 09:55:51.053 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:55:51 compute-0 nova_compute[187439]: 2025-10-09 09:55:51.083 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Refreshing inventories for resource provider f97cf330-2912-473f-81a8-cda2f8811838 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  9 09:55:51 compute-0 nova_compute[187439]: 2025-10-09 09:55:51.118 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Updating ProviderTree inventory for provider f97cf330-2912-473f-81a8-cda2f8811838 from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  9 09:55:51 compute-0 nova_compute[187439]: 2025-10-09 09:55:51.118 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Updating inventory in ProviderTree for provider f97cf330-2912-473f-81a8-cda2f8811838 with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  9 09:55:51 compute-0 nova_compute[187439]: 2025-10-09 09:55:51.127 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Refreshing aggregate associations for resource provider f97cf330-2912-473f-81a8-cda2f8811838, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  9 09:55:51 compute-0 nova_compute[187439]: 2025-10-09 09:55:51.143 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Refreshing trait associations for resource provider f97cf330-2912-473f-81a8-cda2f8811838, traits: HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX2,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX512VAES,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSSE3,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  9 09:55:51 compute-0 nova_compute[187439]: 2025-10-09 09:55:51.167 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:55:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:55:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:51.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:55:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:51 compute-0 nova_compute[187439]: 2025-10-09 09:55:51.561 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:55:51 compute-0 nova_compute[187439]: 2025-10-09 09:55:51.565 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Updating inventory in ProviderTree for provider f97cf330-2912-473f-81a8-cda2f8811838 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  9 09:55:51 compute-0 nova_compute[187439]: 2025-10-09 09:55:51.599 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Updated inventory for provider f97cf330-2912-473f-81a8-cda2f8811838 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct  9 09:55:51 compute-0 nova_compute[187439]: 2025-10-09 09:55:51.600 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Updating resource provider f97cf330-2912-473f-81a8-cda2f8811838 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  9 09:55:51 compute-0 nova_compute[187439]: 2025-10-09 09:55:51.600 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Updating inventory in ProviderTree for provider f97cf330-2912-473f-81a8-cda2f8811838 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  9 09:55:51 compute-0 nova_compute[187439]: 2025-10-09 09:55:51.613 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:55:51 compute-0 nova_compute[187439]: 2025-10-09 09:55:51.613 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
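[editor's note] The inventory pushed to Placement above (provider generation 3 -> 4) determines allocatable capacity as (total - reserved) * allocation_ratio per resource class; note DISK_GB picked up reserved=1 relative to the earlier refresh. A small sketch checking the effective capacity from the exact dict in the log:

    # Effective capacity Placement derives from the inventory pushed above:
    # usable = (total - reserved) * allocation_ratio, per resource class.
    inventory = {
        "VCPU":      {"total": 4,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {usable} allocatable")
    # VCPU: 16.0, MEMORY_MB: 7168.0, DISK_GB: 52.2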
Oct  9 09:55:51 compute-0 nova_compute[187439]: 2025-10-09 09:55:51.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:51.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:55:52] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Oct  9 09:55:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:55:52] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Oct  9 09:55:52 compute-0 nova_compute[187439]: 2025-10-09 09:55:52.583 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:52 compute-0 nova_compute[187439]: 2025-10-09 09:55:52.583 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:52 compute-0 nova_compute[187439]: 2025-10-09 09:55:52.583 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 09:55:52 compute-0 nova_compute[187439]: 2025-10-09 09:55:52.584 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 09:55:52 compute-0 podman[193158]: 2025-10-09 09:55:52.618764802 +0000 UTC m=+0.055111731 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  9 09:55:52 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v680: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.4 KiB/s wr, 8 op/s
Oct  9 09:55:52 compute-0 nova_compute[187439]: 2025-10-09 09:55:52.910 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "refresh_cache-bb0dd1df-5930-471c-a79b-b51d83e9431b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:55:52 compute-0 nova_compute[187439]: 2025-10-09 09:55:52.911 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquired lock "refresh_cache-bb0dd1df-5930-471c-a79b-b51d83e9431b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:55:52 compute-0 nova_compute[187439]: 2025-10-09 09:55:52.911 2 DEBUG nova.network.neutron [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  9 09:55:52 compute-0 nova_compute[187439]: 2025-10-09 09:55:52.911 2 DEBUG nova.objects.instance [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid bb0dd1df-5930-471c-a79b-b51d83e9431b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:55:53 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:53.198 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:55:53 compute-0 nova_compute[187439]: 2025-10-09 09:55:53.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:53 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:55:53.201 92053 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 09:55:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:53.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:53.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:53 compute-0 nova_compute[187439]: 2025-10-09 09:55:53.884 2 DEBUG nova.network.neutron [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Updating instance_info_cache with network_info: [{"id": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "address": "fa:16:3e:9e:ca:a2", "network": {"id": "55d0b606-ef1d-4562-907e-2ce1c8e82d1a", "bridge": "br-int", "label": "tempest-network-smoke--846843571", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ebc58bd-13", "ovs_interfaceid": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:55:53 compute-0 nova_compute[187439]: 2025-10-09 09:55:53.896 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Releasing lock "refresh_cache-bb0dd1df-5930-471c-a79b-b51d83e9431b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:55:53 compute-0 nova_compute[187439]: 2025-10-09 09:55:53.896 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  9 09:55:53 compute-0 nova_compute[187439]: 2025-10-09 09:55:53.897 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:53 compute-0 nova_compute[187439]: 2025-10-09 09:55:53.897 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:53 compute-0 nova_compute[187439]: 2025-10-09 09:55:53.897 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:53 compute-0 nova_compute[187439]: 2025-10-09 09:55:53.898 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:53 compute-0 nova_compute[187439]: 2025-10-09 09:55:53.898 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:53 compute-0 nova_compute[187439]: 2025-10-09 09:55:53.898 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
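[editor's note] _reclaim_queued_deletes is skipped because reclaim_instance_interval is at its default of 0, so soft-deleted instances are purged immediately rather than queued. The knob lives in nova.conf; a sketch of enabling it, where the 300-second value is only an example:

    [DEFAULT]
    # 0 (the default) disables soft-delete reclaim, which is why the
    # periodic task above logs "skipping"; a positive value keeps deleted
    # instances recoverable for that many seconds before reclaim.
    reclaim_instance_interval = 300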
Oct  9 09:55:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:54 compute-0 nova_compute[187439]: 2025-10-09 09:55:54.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:54 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v681: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.4 KiB/s wr, 8 op/s
Oct  9 09:55:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:55.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:55:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:55.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:55:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:56 compute-0 nova_compute[187439]: 2025-10-09 09:55:56.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:56 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v682: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct  9 09:55:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:57.047Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:57.060Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:57.062Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:57.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
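[editor's note] All three dashboard webhooks fail the same way: the *.shiftstack names do not resolve via 192.168.122.80, so every notify retry ends in "no such host". A quick resolver check from the host before touching alertmanager config; the hostnames are copied from the log, and this uses the host's configured resolver rather than querying 192.168.122.80 directly:

    import socket

    for host in ("np0005478302.shiftstack",
                 "np0005478303.shiftstack",
                 "np0005478304.shiftstack"):
        try:
            addrs = {ai[4][0] for ai in socket.getaddrinfo(host, 8443)}
            print(host, "->", sorted(addrs))
        except socket.gaierror as exc:
            # Matches the 'no such host' the dispatcher logs above.
            print(host, "-> unresolved:", exc)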
Oct  9 09:55:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:55:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:57.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:55:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:57.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:55:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:58 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v683: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct  9 09:55:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:58.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:58.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:58.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:55:58.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:55:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:59.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011057152275835123 of space, bias 1.0, pg target 0.3317145682750537 quantized to 32 (current 32)
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:55:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
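[editor's note] The autoscaler's "pg target" is reproducible from the numbers it prints: capacity ratio x bias x the cluster's PG budget, then quantized (and left alone when the change would be small). The budget of 300 below is an assumption consistent with every line above, e.g. 3 OSDs x the default mon_target_pg_per_osd of 100:

    # pg target as printed by the pg_autoscaler lines above:
    #   target = capacity_ratio * bias * pg_budget
    # pg_budget = num_osds * mon_target_pg_per_osd (assumed 3 * 100 here).
    PG_BUDGET = 300

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0011057152275835123, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * PG_BUDGET)
    # .mgr               0.0021557249951162337 -> quantized to 1
    # vms                0.3317145682750537    -> stays at current 32
    # cephfs.cephfs.meta 0.0006104707950771635 -> stays at current 16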
Oct  9 09:55:59 compute-0 nova_compute[187439]: 2025-10-09 09:55:59.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:55:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:55:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:59.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:56:00 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:56:00.204 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ef217152-08e8-40c8-a663-3565c5b77d4a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:56:00 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v684: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct  9 09:56:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:56:01 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:56:01 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:56:01 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:56:01 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:01.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:56:01 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:56:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:56:01 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:56:01 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v685: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 113 op/s
Oct  9 09:56:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:56:01 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:56:01 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:56:01 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:56:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:56:01 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:56:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:56:01 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:56:01 compute-0 nova_compute[187439]: 2025-10-09 09:56:01.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:01.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:02 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:02 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:02 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:02 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:02 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:56:02 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:02 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:02 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:56:02 compute-0 podman[193343]: 2025-10-09 09:56:02.066546904 +0000 UTC m=+0.033630524 container create 8a147bc913e7842c1971ba8685ae58c6a1eb285d00f9a6e9e1b5477b7ab76beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:56:02 compute-0 systemd[1]: Started libpod-conmon-8a147bc913e7842c1971ba8685ae58c6a1eb285d00f9a6e9e1b5477b7ab76beb.scope.
Oct  9 09:56:02 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:56:02 compute-0 podman[193343]: 2025-10-09 09:56:02.126095547 +0000 UTC m=+0.093179167 container init 8a147bc913e7842c1971ba8685ae58c6a1eb285d00f9a6e9e1b5477b7ab76beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  9 09:56:02 compute-0 podman[193343]: 2025-10-09 09:56:02.13179331 +0000 UTC m=+0.098876920 container start 8a147bc913e7842c1971ba8685ae58c6a1eb285d00f9a6e9e1b5477b7ab76beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct  9 09:56:02 compute-0 podman[193343]: 2025-10-09 09:56:02.13319806 +0000 UTC m=+0.100281689 container attach 8a147bc913e7842c1971ba8685ae58c6a1eb285d00f9a6e9e1b5477b7ab76beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_swartz, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:56:02 compute-0 clever_swartz[193357]: 167 167
Oct  9 09:56:02 compute-0 systemd[1]: libpod-8a147bc913e7842c1971ba8685ae58c6a1eb285d00f9a6e9e1b5477b7ab76beb.scope: Deactivated successfully.
Oct  9 09:56:02 compute-0 conmon[193357]: conmon 8a147bc913e7842c1971 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8a147bc913e7842c1971ba8685ae58c6a1eb285d00f9a6e9e1b5477b7ab76beb.scope/container/memory.events
Oct  9 09:56:02 compute-0 podman[193343]: 2025-10-09 09:56:02.139587026 +0000 UTC m=+0.106670656 container died 8a147bc913e7842c1971ba8685ae58c6a1eb285d00f9a6e9e1b5477b7ab76beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_swartz, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:56:02 compute-0 podman[193343]: 2025-10-09 09:56:02.051739052 +0000 UTC m=+0.018822682 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:56:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-93c5a8309ce7de1d6f683490a55232a32310398d81e87fb6d4581d08fc8b5c79-merged.mount: Deactivated successfully.
Oct  9 09:56:02 compute-0 podman[193343]: 2025-10-09 09:56:02.168037611 +0000 UTC m=+0.135121221 container remove 8a147bc913e7842c1971ba8685ae58c6a1eb285d00f9a6e9e1b5477b7ab76beb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:56:02 compute-0 systemd[1]: libpod-conmon-8a147bc913e7842c1971ba8685ae58c6a1eb285d00f9a6e9e1b5477b7ab76beb.scope: Deactivated successfully.
Oct  9 09:56:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:56:02] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Oct  9 09:56:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:56:02] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Oct  9 09:56:02 compute-0 podman[193379]: 2025-10-09 09:56:02.324741579 +0000 UTC m=+0.037106788 container create 1479bf3927ce10d997201f98b8288048f3a62bb475b97097241c3ced4d492251 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  9 09:56:02 compute-0 systemd[1]: Started libpod-conmon-1479bf3927ce10d997201f98b8288048f3a62bb475b97097241c3ced4d492251.scope.
Oct  9 09:56:02 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:56:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f20c9161c9fb72beda697a52ee8f04cad36e151b877ed4624f21520bae51026/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:56:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f20c9161c9fb72beda697a52ee8f04cad36e151b877ed4624f21520bae51026/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:56:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f20c9161c9fb72beda697a52ee8f04cad36e151b877ed4624f21520bae51026/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:56:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f20c9161c9fb72beda697a52ee8f04cad36e151b877ed4624f21520bae51026/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:56:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f20c9161c9fb72beda697a52ee8f04cad36e151b877ed4624f21520bae51026/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:56:02 compute-0 podman[193379]: 2025-10-09 09:56:02.38681399 +0000 UTC m=+0.099179220 container init 1479bf3927ce10d997201f98b8288048f3a62bb475b97097241c3ced4d492251 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 09:56:02 compute-0 podman[193379]: 2025-10-09 09:56:02.391746321 +0000 UTC m=+0.104111530 container start 1479bf3927ce10d997201f98b8288048f3a62bb475b97097241c3ced4d492251 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_franklin, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:56:02 compute-0 podman[193379]: 2025-10-09 09:56:02.393328414 +0000 UTC m=+0.105693624 container attach 1479bf3927ce10d997201f98b8288048f3a62bb475b97097241c3ced4d492251 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  9 09:56:02 compute-0 podman[193379]: 2025-10-09 09:56:02.311990404 +0000 UTC m=+0.024355625 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:56:02 compute-0 wizardly_franklin[193392]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:56:02 compute-0 wizardly_franklin[193392]: --> All data devices are unavailable
Oct  9 09:56:02 compute-0 systemd[1]: libpod-1479bf3927ce10d997201f98b8288048f3a62bb475b97097241c3ced4d492251.scope: Deactivated successfully.
Oct  9 09:56:02 compute-0 podman[193379]: 2025-10-09 09:56:02.695941022 +0000 UTC m=+0.408306231 container died 1479bf3927ce10d997201f98b8288048f3a62bb475b97097241c3ced4d492251 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:56:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f20c9161c9fb72beda697a52ee8f04cad36e151b877ed4624f21520bae51026-merged.mount: Deactivated successfully.
Oct  9 09:56:02 compute-0 podman[193379]: 2025-10-09 09:56:02.723792959 +0000 UTC m=+0.436158170 container remove 1479bf3927ce10d997201f98b8288048f3a62bb475b97097241c3ced4d492251 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct  9 09:56:02 compute-0 systemd[1]: libpod-conmon-1479bf3927ce10d997201f98b8288048f3a62bb475b97097241c3ced4d492251.scope: Deactivated successfully.
Oct  9 09:56:02 compute-0 podman[193408]: 2025-10-09 09:56:02.829917074 +0000 UTC m=+0.105275827 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  9 09:56:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:03 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:56:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:03.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:56:03 compute-0 podman[193521]: 2025-10-09 09:56:03.247419785 +0000 UTC m=+0.036895189 container create 42a55710cc7b5efa53a5220416a4af3cf39acf999f957264c2467e39676acebd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:56:03 compute-0 systemd[1]: Started libpod-conmon-42a55710cc7b5efa53a5220416a4af3cf39acf999f957264c2467e39676acebd.scope.
Oct  9 09:56:03 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:56:03 compute-0 podman[193521]: 2025-10-09 09:56:03.314856232 +0000 UTC m=+0.104331625 container init 42a55710cc7b5efa53a5220416a4af3cf39acf999f957264c2467e39676acebd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_babbage, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:56:03 compute-0 podman[193521]: 2025-10-09 09:56:03.321608805 +0000 UTC m=+0.111084198 container start 42a55710cc7b5efa53a5220416a4af3cf39acf999f957264c2467e39676acebd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:56:03 compute-0 podman[193521]: 2025-10-09 09:56:03.323098954 +0000 UTC m=+0.112574349 container attach 42a55710cc7b5efa53a5220416a4af3cf39acf999f957264c2467e39676acebd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_babbage, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:56:03 compute-0 fervent_babbage[193534]: 167 167
Oct  9 09:56:03 compute-0 systemd[1]: libpod-42a55710cc7b5efa53a5220416a4af3cf39acf999f957264c2467e39676acebd.scope: Deactivated successfully.
Oct  9 09:56:03 compute-0 podman[193521]: 2025-10-09 09:56:03.327109497 +0000 UTC m=+0.116584890 container died 42a55710cc7b5efa53a5220416a4af3cf39acf999f957264c2467e39676acebd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_babbage, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct  9 09:56:03 compute-0 podman[193521]: 2025-10-09 09:56:03.233505108 +0000 UTC m=+0.022980533 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:56:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-39809fc7d306c1692bb22733d2111ac8703df5952d3d6639f3d984cb3c4c2053-merged.mount: Deactivated successfully.
Oct  9 09:56:03 compute-0 podman[193521]: 2025-10-09 09:56:03.348640435 +0000 UTC m=+0.138115829 container remove 42a55710cc7b5efa53a5220416a4af3cf39acf999f957264c2467e39676acebd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_babbage, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  9 09:56:03 compute-0 systemd[1]: libpod-conmon-42a55710cc7b5efa53a5220416a4af3cf39acf999f957264c2467e39676acebd.scope: Deactivated successfully.
Oct  9 09:56:03 compute-0 podman[193557]: 2025-10-09 09:56:03.503682497 +0000 UTC m=+0.037362592 container create caa0f026f6b76ff0ecb3b0170d029ce3462f0e6088d9b30e5b85acd4d812b8fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lumiere, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  9 09:56:03 compute-0 systemd[1]: Started libpod-conmon-caa0f026f6b76ff0ecb3b0170d029ce3462f0e6088d9b30e5b85acd4d812b8fd.scope.
Oct  9 09:56:03 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:56:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e761e0c24ef78a3448abdfce60d04d6daab49531eb6ce1ce3948f17a407b66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:56:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e761e0c24ef78a3448abdfce60d04d6daab49531eb6ce1ce3948f17a407b66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:56:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e761e0c24ef78a3448abdfce60d04d6daab49531eb6ce1ce3948f17a407b66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:56:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e761e0c24ef78a3448abdfce60d04d6daab49531eb6ce1ce3948f17a407b66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:56:03 compute-0 podman[193557]: 2025-10-09 09:56:03.567532521 +0000 UTC m=+0.101212616 container init caa0f026f6b76ff0ecb3b0170d029ce3462f0e6088d9b30e5b85acd4d812b8fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lumiere, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  9 09:56:03 compute-0 podman[193557]: 2025-10-09 09:56:03.572531096 +0000 UTC m=+0.106211190 container start caa0f026f6b76ff0ecb3b0170d029ce3462f0e6088d9b30e5b85acd4d812b8fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:56:03 compute-0 podman[193557]: 2025-10-09 09:56:03.573512948 +0000 UTC m=+0.107193042 container attach caa0f026f6b76ff0ecb3b0170d029ce3462f0e6088d9b30e5b85acd4d812b8fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lumiere, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:56:03 compute-0 podman[193557]: 2025-10-09 09:56:03.490583168 +0000 UTC m=+0.024263282 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:56:03 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v686: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 113 op/s
Oct  9 09:56:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:03.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:03 compute-0 objective_lumiere[193570]: {
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:    "1": [
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:        {
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:            "devices": [
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:                "/dev/loop3"
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:            ],
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:            "lv_name": "ceph_lv0",
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:            "lv_size": "21470642176",
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:            "name": "ceph_lv0",
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:            "tags": {
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:                "ceph.cluster_name": "ceph",
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:                "ceph.crush_device_class": "",
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:                "ceph.encrypted": "0",
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:                "ceph.osd_id": "1",
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:                "ceph.type": "block",
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:                "ceph.vdo": "0",
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:                "ceph.with_tpm": "0"
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:            },
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:            "type": "block",
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:            "vg_name": "ceph_vg0"
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:        }
Oct  9 09:56:03 compute-0 objective_lumiere[193570]:    ]
Oct  9 09:56:03 compute-0 objective_lumiere[193570]: }
Oct  9 09:56:03 compute-0 systemd[1]: libpod-caa0f026f6b76ff0ecb3b0170d029ce3462f0e6088d9b30e5b85acd4d812b8fd.scope: Deactivated successfully.
Oct  9 09:56:03 compute-0 podman[193557]: 2025-10-09 09:56:03.814623329 +0000 UTC m=+0.348303423 container died caa0f026f6b76ff0ecb3b0170d029ce3462f0e6088d9b30e5b85acd4d812b8fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:56:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-87e761e0c24ef78a3448abdfce60d04d6daab49531eb6ce1ce3948f17a407b66-merged.mount: Deactivated successfully.
Oct  9 09:56:03 compute-0 podman[193557]: 2025-10-09 09:56:03.845688605 +0000 UTC m=+0.379368699 container remove caa0f026f6b76ff0ecb3b0170d029ce3462f0e6088d9b30e5b85acd4d812b8fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lumiere, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  9 09:56:03 compute-0 systemd[1]: libpod-conmon-caa0f026f6b76ff0ecb3b0170d029ce3462f0e6088d9b30e5b85acd4d812b8fd.scope: Deactivated successfully.
Oct  9 09:56:04 compute-0 nova_compute[187439]: 2025-10-09 09:56:04.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:56:04 compute-0 podman[193670]: 2025-10-09 09:56:04.406215509 +0000 UTC m=+0.035935360 container create 13583a7b8cd0d0f828f95a0c2960f88e6714b533253ef92d2984ad2e7e8874f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  9 09:56:04 compute-0 systemd[1]: Started libpod-conmon-13583a7b8cd0d0f828f95a0c2960f88e6714b533253ef92d2984ad2e7e8874f5.scope.
Oct  9 09:56:04 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:56:04 compute-0 podman[193670]: 2025-10-09 09:56:04.462229859 +0000 UTC m=+0.091949720 container init 13583a7b8cd0d0f828f95a0c2960f88e6714b533253ef92d2984ad2e7e8874f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_brown, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:56:04 compute-0 podman[193670]: 2025-10-09 09:56:04.467217032 +0000 UTC m=+0.096936873 container start 13583a7b8cd0d0f828f95a0c2960f88e6714b533253ef92d2984ad2e7e8874f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct  9 09:56:04 compute-0 podman[193670]: 2025-10-09 09:56:04.469064506 +0000 UTC m=+0.098784367 container attach 13583a7b8cd0d0f828f95a0c2960f88e6714b533253ef92d2984ad2e7e8874f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct  9 09:56:04 compute-0 practical_brown[193684]: 167 167
Oct  9 09:56:04 compute-0 systemd[1]: libpod-13583a7b8cd0d0f828f95a0c2960f88e6714b533253ef92d2984ad2e7e8874f5.scope: Deactivated successfully.
Oct  9 09:56:04 compute-0 podman[193670]: 2025-10-09 09:56:04.473730594 +0000 UTC m=+0.103450445 container died 13583a7b8cd0d0f828f95a0c2960f88e6714b533253ef92d2984ad2e7e8874f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  9 09:56:04 compute-0 podman[193670]: 2025-10-09 09:56:04.393421535 +0000 UTC m=+0.023141407 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:56:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-2263d4a60ff0fe4eb438d0cf275368e17c55d22dd304574a484243743861a544-merged.mount: Deactivated successfully.
Oct  9 09:56:04 compute-0 podman[193670]: 2025-10-09 09:56:04.501521897 +0000 UTC m=+0.131241747 container remove 13583a7b8cd0d0f828f95a0c2960f88e6714b533253ef92d2984ad2e7e8874f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:56:04 compute-0 systemd[1]: libpod-conmon-13583a7b8cd0d0f828f95a0c2960f88e6714b533253ef92d2984ad2e7e8874f5.scope: Deactivated successfully.
Oct  9 09:56:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:56:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:56:04 compute-0 podman[193707]: 2025-10-09 09:56:04.673059172 +0000 UTC m=+0.038001406 container create 47a03f36935cfb1bca4a9f2a02e17adc38979c799194fad4004ec2d38c052189 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_solomon, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  9 09:56:04 compute-0 systemd[1]: Started libpod-conmon-47a03f36935cfb1bca4a9f2a02e17adc38979c799194fad4004ec2d38c052189.scope.
Oct  9 09:56:04 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:56:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/369bd998e36e2e2a6517faf23ab7dfbd32ec4dcfaa417aff82d34857192378b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:56:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/369bd998e36e2e2a6517faf23ab7dfbd32ec4dcfaa417aff82d34857192378b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:56:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/369bd998e36e2e2a6517faf23ab7dfbd32ec4dcfaa417aff82d34857192378b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:56:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/369bd998e36e2e2a6517faf23ab7dfbd32ec4dcfaa417aff82d34857192378b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:56:04 compute-0 podman[193707]: 2025-10-09 09:56:04.741294927 +0000 UTC m=+0.106237180 container init 47a03f36935cfb1bca4a9f2a02e17adc38979c799194fad4004ec2d38c052189 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_solomon, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:56:04 compute-0 podman[193707]: 2025-10-09 09:56:04.747504735 +0000 UTC m=+0.112446968 container start 47a03f36935cfb1bca4a9f2a02e17adc38979c799194fad4004ec2d38c052189 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_solomon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:56:04 compute-0 podman[193707]: 2025-10-09 09:56:04.748937016 +0000 UTC m=+0.113879259 container attach 47a03f36935cfb1bca4a9f2a02e17adc38979c799194fad4004ec2d38c052189 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_solomon, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:56:04 compute-0 podman[193707]: 2025-10-09 09:56:04.658637659 +0000 UTC m=+0.023579902 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:56:05 compute-0 nova_compute[187439]: 2025-10-09 09:56:05.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:56:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:56:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:05.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:56:05 compute-0 priceless_solomon[193720]: {}
Oct  9 09:56:05 compute-0 lvm[193797]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:56:05 compute-0 lvm[193797]: VG ceph_vg0 finished
Oct  9 09:56:05 compute-0 systemd[1]: libpod-47a03f36935cfb1bca4a9f2a02e17adc38979c799194fad4004ec2d38c052189.scope: Deactivated successfully.
Oct  9 09:56:05 compute-0 podman[193798]: 2025-10-09 09:56:05.396157999 +0000 UTC m=+0.024622099 container died 47a03f36935cfb1bca4a9f2a02e17adc38979c799194fad4004ec2d38c052189 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_solomon, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Oct  9 09:56:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-369bd998e36e2e2a6517faf23ab7dfbd32ec4dcfaa417aff82d34857192378b0-merged.mount: Deactivated successfully.
Oct  9 09:56:05 compute-0 podman[193798]: 2025-10-09 09:56:05.421013425 +0000 UTC m=+0.049477504 container remove 47a03f36935cfb1bca4a9f2a02e17adc38979c799194fad4004ec2d38c052189 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_solomon, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct  9 09:56:05 compute-0 systemd[1]: libpod-conmon-47a03f36935cfb1bca4a9f2a02e17adc38979c799194fad4004ec2d38c052189.scope: Deactivated successfully.
Oct  9 09:56:05 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:56:05 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:05 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:56:05 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:05 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v687: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 114 op/s
Oct  9 09:56:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:05.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:06 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:06 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:06 compute-0 nova_compute[187439]: 2025-10-09 09:56:06.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:56:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:07.048Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:07.057Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:07.057Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:07.058Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:07.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:07 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v688: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 18 KiB/s wr, 83 op/s
Oct  9 09:56:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:56:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:07.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:56:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:08 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:08 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  9 09:56:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:08.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:08.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:08.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:08.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:09.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:09 compute-0 nova_compute[187439]: 2025-10-09 09:56:09.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:09 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v689: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 18 KiB/s wr, 83 op/s
Oct  9 09:56:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:09.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:56:10.108 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:56:10.109 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:56:10.109 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:11.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:11 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v690: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 204 KiB/s rd, 2.4 MiB/s wr, 67 op/s
Oct  9 09:56:11 compute-0 nova_compute[187439]: 2025-10-09 09:56:11.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:11.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:56:12] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Oct  9 09:56:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:56:12] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Oct  9 09:56:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:13 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:13.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:13 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v691: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct  9 09:56:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:13.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:14 compute-0 nova_compute[187439]: 2025-10-09 09:56:14.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:14 compute-0 podman[193870]: 2025-10-09 09:56:14.62592739 +0000 UTC m=+0.053466767 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 09:56:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:15.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:15 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v692: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 184 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct  9 09:56:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:15.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:16 compute-0 nova_compute[187439]: 2025-10-09 09:56:16.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:17.049Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:17.058Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:17.059Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:17.059Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:17.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:17 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v693: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct  9 09:56:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:17.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:17 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:18 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:18 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:18 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:18.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:18.887Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:18.887Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:18.887Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:56:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:19.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:56:19 compute-0 nova_compute[187439]: 2025-10-09 09:56:19.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:56:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:56:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:56:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:56:19 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v694: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct  9 09:56:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:56:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:56:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:19.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:56:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:56:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:56:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:21.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:56:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:21 compute-0 podman[193893]: 2025-10-09 09:56:21.602780696 +0000 UTC m=+0.042964423 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct  9 09:56:21 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v695: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.2 MiB/s wr, 232 op/s
Oct  9 09:56:21 compute-0 nova_compute[187439]: 2025-10-09 09:56:21.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:21.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:56:22] "GET /metrics HTTP/1.1" 200 48530 "" "Prometheus/2.51.0"
Oct  9 09:56:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:56:22] "GET /metrics HTTP/1.1" 200 48530 "" "Prometheus/2.51.0"
Oct  9 09:56:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:23 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:23 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:23 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:23.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:23 compute-0 podman[193912]: 2025-10-09 09:56:23.600609688 +0000 UTC m=+0.042699384 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  9 09:56:23 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v696: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 24 KiB/s wr, 173 op/s
Oct  9 09:56:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:56:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:23.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:56:24 compute-0 nova_compute[187439]: 2025-10-09 09:56:24.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:25 compute-0 ovn_controller[83056]: 2025-10-09T09:56:25Z|00034|binding|INFO|Releasing lport 4ce3dd88-4506-4d4b-8422-e06959275853 from this chassis (sb_readonly=0)
Oct  9 09:56:25 compute-0 nova_compute[187439]: 2025-10-09 09:56:25.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:25.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:25 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v697: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 24 KiB/s wr, 173 op/s
Oct  9 09:56:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:25.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:26 compute-0 nova_compute[187439]: 2025-10-09 09:56:26.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:27.049Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:27.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:27.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:27.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.109 2 DEBUG nova.compute.manager [req-95f0bb81-f132-49d4-8936-86ec25c5c2a5 req-deea1726-3ff5-4091-b09d-5314d683ddcd b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Received event network-changed-5ebc58bd-1327-457d-a25b-9c56c1001f06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.109 2 DEBUG nova.compute.manager [req-95f0bb81-f132-49d4-8936-86ec25c5c2a5 req-deea1726-3ff5-4091-b09d-5314d683ddcd b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Refreshing instance network info cache due to event network-changed-5ebc58bd-1327-457d-a25b-9c56c1001f06. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.109 2 DEBUG oslo_concurrency.lockutils [req-95f0bb81-f132-49d4-8936-86ec25c5c2a5 req-deea1726-3ff5-4091-b09d-5314d683ddcd b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-bb0dd1df-5930-471c-a79b-b51d83e9431b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.109 2 DEBUG oslo_concurrency.lockutils [req-95f0bb81-f132-49d4-8936-86ec25c5c2a5 req-deea1726-3ff5-4091-b09d-5314d683ddcd b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-bb0dd1df-5930-471c-a79b-b51d83e9431b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.110 2 DEBUG nova.network.neutron [req-95f0bb81-f132-49d4-8936-86ec25c5c2a5 req-deea1726-3ff5-4091-b09d-5314d683ddcd b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Refreshing network info cache for port 5ebc58bd-1327-457d-a25b-9c56c1001f06 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.148 2 DEBUG oslo_concurrency.lockutils [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "bb0dd1df-5930-471c-a79b-b51d83e9431b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.148 2 DEBUG oslo_concurrency.lockutils [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "bb0dd1df-5930-471c-a79b-b51d83e9431b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.148 2 DEBUG oslo_concurrency.lockutils [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "bb0dd1df-5930-471c-a79b-b51d83e9431b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.149 2 DEBUG oslo_concurrency.lockutils [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "bb0dd1df-5930-471c-a79b-b51d83e9431b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.149 2 DEBUG oslo_concurrency.lockutils [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "bb0dd1df-5930-471c-a79b-b51d83e9431b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.150 2 INFO nova.compute.manager [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Terminating instance#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.150 2 DEBUG nova.compute.manager [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  9 09:56:27 compute-0 kernel: tap5ebc58bd-13 (unregistering): left promiscuous mode
Oct  9 09:56:27 compute-0 NetworkManager[982]: <info>  [1760003787.2005] device (tap5ebc58bd-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 09:56:27 compute-0 ovn_controller[83056]: 2025-10-09T09:56:27Z|00035|binding|INFO|Releasing lport 5ebc58bd-1327-457d-a25b-9c56c1001f06 from this chassis (sb_readonly=0)
Oct  9 09:56:27 compute-0 ovn_controller[83056]: 2025-10-09T09:56:27Z|00036|binding|INFO|Setting lport 5ebc58bd-1327-457d-a25b-9c56c1001f06 down in Southbound
Oct  9 09:56:27 compute-0 ovn_controller[83056]: 2025-10-09T09:56:27Z|00037|binding|INFO|Removing iface tap5ebc58bd-13 ovn-installed in OVS
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:27 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:56:27.213 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:ca:a2 10.100.0.9'], port_security=['fa:16:3e:9e:ca:a2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'bb0dd1df-5930-471c-a79b-b51d83e9431b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55d0b606-ef1d-4562-907e-2ce1c8e82d1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5424a1d4-c7c5-4d79-af3c-b3e024a88ed4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f595f2ef-2be6-42da-a1e0-cbeb250a9fb9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f406a6797f0>], logical_port=5ebc58bd-1327-457d-a25b-9c56c1001f06) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f406a6797f0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:56:27 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:56:27.215 92053 INFO neutron.agent.ovn.metadata.agent [-] Port 5ebc58bd-1327-457d-a25b-9c56c1001f06 in datapath 55d0b606-ef1d-4562-907e-2ce1c8e82d1a unbound from our chassis#033[00m
Oct  9 09:56:27 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:56:27.216 92053 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55d0b606-ef1d-4562-907e-2ce1c8e82d1a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  9 09:56:27 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:56:27.218 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[f15b63e0-be47-4e1b-978e-c5cb883e13af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:27 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:56:27.218 92053 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a namespace which is not needed anymore#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:27 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Oct  9 09:56:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:56:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:27.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:56:27 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 13.409s CPU time.
Oct  9 09:56:27 compute-0 systemd-machined[143379]: Machine qemu-1-instance-00000001 terminated.
Oct  9 09:56:27 compute-0 neutron-haproxy-ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a[192960]: [NOTICE]   (192964) : haproxy version is 2.8.14-c23fe91
Oct  9 09:56:27 compute-0 neutron-haproxy-ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a[192960]: [NOTICE]   (192964) : path to executable is /usr/sbin/haproxy
Oct  9 09:56:27 compute-0 neutron-haproxy-ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a[192960]: [ALERT]    (192964) : Current worker (192966) exited with code 143 (Terminated)
Oct  9 09:56:27 compute-0 neutron-haproxy-ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a[192960]: [WARNING]  (192964) : All workers exited. Exiting... (0)
Oct  9 09:56:27 compute-0 systemd[1]: libpod-0660fb64bad5cda17426e2fe2b720616850ea2c15728583f82e1debec3d61307.scope: Deactivated successfully.
Oct  9 09:56:27 compute-0 conmon[192960]: conmon 0660fb64bad5cda17426 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0660fb64bad5cda17426e2fe2b720616850ea2c15728583f82e1debec3d61307.scope/container/memory.events
Oct  9 09:56:27 compute-0 podman[193953]: 2025-10-09 09:56:27.341434115 +0000 UTC m=+0.036889549 container died 0660fb64bad5cda17426e2fe2b720616850ea2c15728583f82e1debec3d61307 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:56:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0660fb64bad5cda17426e2fe2b720616850ea2c15728583f82e1debec3d61307-userdata-shm.mount: Deactivated successfully.
Oct  9 09:56:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-50981af4961ee6ea7a1afb3ca28e1fc8981c6f852fc8bed01f8855d2bd63f821-merged.mount: Deactivated successfully.
Oct  9 09:56:27 compute-0 podman[193953]: 2025-10-09 09:56:27.373249295 +0000 UTC m=+0.068704719 container cleanup 0660fb64bad5cda17426e2fe2b720616850ea2c15728583f82e1debec3d61307 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.383 2 INFO nova.virt.libvirt.driver [-] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Instance destroyed successfully.#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.385 2 DEBUG nova.objects.instance [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'resources' on Instance uuid bb0dd1df-5930-471c-a79b-b51d83e9431b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:56:27 compute-0 systemd[1]: libpod-conmon-0660fb64bad5cda17426e2fe2b720616850ea2c15728583f82e1debec3d61307.scope: Deactivated successfully.
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.422 2 DEBUG nova.virt.libvirt.vif [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:55:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1027666294',display_name='tempest-TestNetworkBasicOps-server-1027666294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1027666294',id=1,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbqXXzO4EL6O7qoSjI6lvqp48ZKfLgqTsWRFa/6Ez5EN4tUY5bL3HEiWU6aomP3iRdq/9JJnaMZ+I5jCxjRHt6+P+gstplvEf4nanxNL34YzLOWaL1PMwWFpFUmL3vFew==',key_name='tempest-TestNetworkBasicOps-219100258',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:55:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-2454p41c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:55:21Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=bb0dd1df-5930-471c-a79b-b51d83e9431b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "address": "fa:16:3e:9e:ca:a2", "network": {"id": "55d0b606-ef1d-4562-907e-2ce1c8e82d1a", "bridge": "br-int", "label": "tempest-network-smoke--846843571", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ebc58bd-13", "ovs_interfaceid": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.423 2 DEBUG nova.network.os_vif_util [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "address": "fa:16:3e:9e:ca:a2", "network": {"id": "55d0b606-ef1d-4562-907e-2ce1c8e82d1a", "bridge": "br-int", "label": "tempest-network-smoke--846843571", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ebc58bd-13", "ovs_interfaceid": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.424 2 DEBUG nova.network.os_vif_util [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:ca:a2,bridge_name='br-int',has_traffic_filtering=True,id=5ebc58bd-1327-457d-a25b-9c56c1001f06,network=Network(55d0b606-ef1d-4562-907e-2ce1c8e82d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ebc58bd-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.424 2 DEBUG os_vif [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:ca:a2,bridge_name='br-int',has_traffic_filtering=True,id=5ebc58bd-1327-457d-a25b-9c56c1001f06,network=Network(55d0b606-ef1d-4562-907e-2ce1c8e82d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ebc58bd-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.426 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ebc58bd-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.435 2 INFO os_vif [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:ca:a2,bridge_name='br-int',has_traffic_filtering=True,id=5ebc58bd-1327-457d-a25b-9c56c1001f06,network=Network(55d0b606-ef1d-4562-907e-2ce1c8e82d1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ebc58bd-13')#033[00m
Oct  9 09:56:27 compute-0 podman[193988]: 2025-10-09 09:56:27.450943744 +0000 UTC m=+0.047132273 container remove 0660fb64bad5cda17426e2fe2b720616850ea2c15728583f82e1debec3d61307 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  9 09:56:27 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:56:27.456 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[797edf42-7d23-48d1-821f-bd98f79db28e]: (4, ('Thu Oct  9 09:56:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a (0660fb64bad5cda17426e2fe2b720616850ea2c15728583f82e1debec3d61307)\n0660fb64bad5cda17426e2fe2b720616850ea2c15728583f82e1debec3d61307\nThu Oct  9 09:56:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a (0660fb64bad5cda17426e2fe2b720616850ea2c15728583f82e1debec3d61307)\n0660fb64bad5cda17426e2fe2b720616850ea2c15728583f82e1debec3d61307\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:27 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:56:27.460 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[f1e48a32-154c-4151-aac2-dc8a770a6a53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:27 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:56:27.462 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55d0b606-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:56:27 compute-0 kernel: tap55d0b606-e0: left promiscuous mode
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:27 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:56:27.472 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[cd0d2498-7439-4845-948a-a8bc03fd1827]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:27 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:56:27.492 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce7d37b-b3d5-4b58-a92b-875404a6fc58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:27 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:56:27.493 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[18810921-e6cb-4776-8bf7-fe7478c390dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:27 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:56:27.512 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f8e8a9-5791-4735-bf92-518fdfac4445]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 140839, 'reachable_time': 26864, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 194020, 'error': None, 'target': 'ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d55d0b606\x2def1d\x2d4562\x2d907e\x2d2ce1c8e82d1a.mount: Deactivated successfully.
Oct  9 09:56:27 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:56:27.528 92357 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  9 09:56:27 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:56:27.529 92357 DEBUG oslo.privsep.daemon [-] privsep: reply[a31ff7e5-be55-41fb-b4d9-302d7d19dc7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
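
The records above show the metadata agent tearing down the ovnmeta- namespace once its last port is gone: the privileged ip_lib helper deletes the namespace and systemd cleans up the matching netns mount. A minimal sketch of the underlying call, assuming pyroute2 is installed and the caller already holds the privileges that privsep normally brokers (the helper name is illustrative, not neutron's actual code):

    import errno
    from pyroute2 import netns

    def remove_netns_idempotent(name):
        # Delete a namespace such as 'ovnmeta-<network-uuid>'; treat an
        # already-missing namespace as success.
        try:
            netns.remove(name)
        except OSError as exc:
            if exc.errno != errno.ENOENT:
                raise

    remove_netns_idempotent('ovnmeta-55d0b606-ef1d-4562-907e-2ce1c8e82d1a')
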
Oct  9 09:56:27 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v698: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 22 KiB/s wr, 172 op/s
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.655 2 INFO nova.virt.libvirt.driver [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Deleting instance files /var/lib/nova/instances/bb0dd1df-5930-471c-a79b-b51d83e9431b_del#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.656 2 INFO nova.virt.libvirt.driver [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Deletion of /var/lib/nova/instances/bb0dd1df-5930-471c-a79b-b51d83e9431b_del complete#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.714 2 DEBUG nova.virt.libvirt.host [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.714 2 INFO nova.virt.libvirt.host [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] UEFI support detected#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.716 2 INFO nova.compute.manager [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.716 2 DEBUG oslo.service.loopingcall [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.716 2 DEBUG nova.compute.manager [-] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.716 2 DEBUG nova.network.neutron [-] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
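
The "Waiting for function ... _deallocate_network_with_retries to return" record is oslo.service's looping-call machinery, which nova uses here to retry network deallocation until it succeeds. A minimal sketch of that pattern with the fixed-interval variant (the retry policy below is invented for illustration; nova wraps its own backoff logic around the same primitive):

    from oslo_service import loopingcall

    attempts = {'count': 0}

    def _try_deallocate():
        attempts['count'] += 1
        if attempts['count'] >= 3:  # pretend the third attempt succeeds
            raise loopingcall.LoopingCallDone(retvalue=True)

    timer = loopingcall.FixedIntervalLoopingCall(_try_deallocate)
    result = timer.start(interval=1.0).wait()  # blocks until Done is raised
    print(result)  # True
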
Oct  9 09:56:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:27.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
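
The recurring anonymous "HEAD / HTTP/1.0" records from 192.168.122.100 and .102 are load-balancer health probes against radosgw's beast frontend. An equivalent probe from the standard library; the target host and port below are assumptions, since the listening endpoint is not shown in the log:

    import http.client

    # Host and port are placeholders -- substitute the real beast endpoint.
    conn = http.client.HTTPConnection('compute-0', 8080, timeout=5)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # the probes above log http_status=200
    conn.close()
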
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.748 2 DEBUG nova.compute.manager [req-6948a20d-71e9-4721-a451-a7fd2c4b2d84 req-2bf047f6-79eb-434c-bc78-028add426427 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Received event network-vif-unplugged-5ebc58bd-1327-457d-a25b-9c56c1001f06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.748 2 DEBUG oslo_concurrency.lockutils [req-6948a20d-71e9-4721-a451-a7fd2c4b2d84 req-2bf047f6-79eb-434c-bc78-028add426427 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "bb0dd1df-5930-471c-a79b-b51d83e9431b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.748 2 DEBUG oslo_concurrency.lockutils [req-6948a20d-71e9-4721-a451-a7fd2c4b2d84 req-2bf047f6-79eb-434c-bc78-028add426427 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "bb0dd1df-5930-471c-a79b-b51d83e9431b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.749 2 DEBUG oslo_concurrency.lockutils [req-6948a20d-71e9-4721-a451-a7fd2c4b2d84 req-2bf047f6-79eb-434c-bc78-028add426427 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "bb0dd1df-5930-471c-a79b-b51d83e9431b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.749 2 DEBUG nova.compute.manager [req-6948a20d-71e9-4721-a451-a7fd2c4b2d84 req-2bf047f6-79eb-434c-bc78-028add426427 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] No waiting events found dispatching network-vif-unplugged-5ebc58bd-1327-457d-a25b-9c56c1001f06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 09:56:27 compute-0 nova_compute[187439]: 2025-10-09 09:56:27.749 2 DEBUG nova.compute.manager [req-6948a20d-71e9-4721-a451-a7fd2c4b2d84 req-2bf047f6-79eb-434c-bc78-028add426427 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Received event network-vif-unplugged-5ebc58bd-1327-457d-a25b-9c56c1001f06 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
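
The Acquiring/acquired/released triplet around the event pop is oslo.concurrency's lock logging: every external event for an instance is serialized on a per-instance "<uuid>-events" lock before being dispatched. The same pattern in miniature (the handler body is hypothetical, not nova's real code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('bb0dd1df-5930-471c-a79b-b51d83e9431b-events')
    def pop_instance_event(event_name):
        # Only one thread at a time may touch this instance's pending-event
        # table; lockutils emits the acquire/release DEBUG lines seen above.
        return event_name

    pop_instance_event('network-vif-unplugged-5ebc58bd-1327-457d-a25b-9c56c1001f06')
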
Oct  9 09:56:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:28 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:28 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:28 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:28 compute-0 nova_compute[187439]: 2025-10-09 09:56:28.251 2 DEBUG nova.network.neutron [-] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:56:28 compute-0 nova_compute[187439]: 2025-10-09 09:56:28.261 2 INFO nova.compute.manager [-] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Took 0.54 seconds to deallocate network for instance.#033[00m
Oct  9 09:56:28 compute-0 nova_compute[187439]: 2025-10-09 09:56:28.295 2 DEBUG oslo_concurrency.lockutils [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:28 compute-0 nova_compute[187439]: 2025-10-09 09:56:28.295 2 DEBUG oslo_concurrency.lockutils [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:28 compute-0 nova_compute[187439]: 2025-10-09 09:56:28.338 2 DEBUG oslo_concurrency.processutils [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:56:28 compute-0 nova_compute[187439]: 2025-10-09 09:56:28.605 2 DEBUG nova.network.neutron [req-95f0bb81-f132-49d4-8936-86ec25c5c2a5 req-deea1726-3ff5-4091-b09d-5314d683ddcd b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Updated VIF entry in instance network info cache for port 5ebc58bd-1327-457d-a25b-9c56c1001f06. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 09:56:28 compute-0 nova_compute[187439]: 2025-10-09 09:56:28.606 2 DEBUG nova.network.neutron [req-95f0bb81-f132-49d4-8936-86ec25c5c2a5 req-deea1726-3ff5-4091-b09d-5314d683ddcd b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Updating instance_info_cache with network_info: [{"id": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "address": "fa:16:3e:9e:ca:a2", "network": {"id": "55d0b606-ef1d-4562-907e-2ce1c8e82d1a", "bridge": "br-int", "label": "tempest-network-smoke--846843571", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ebc58bd-13", "ovs_interfaceid": "5ebc58bd-1327-457d-a25b-9c56c1001f06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:56:28 compute-0 nova_compute[187439]: 2025-10-09 09:56:28.619 2 DEBUG oslo_concurrency.lockutils [req-95f0bb81-f132-49d4-8936-86ec25c5c2a5 req-deea1726-3ff5-4091-b09d-5314d683ddcd b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-bb0dd1df-5930-471c-a79b-b51d83e9431b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:56:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:56:28 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2733332451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:56:28 compute-0 nova_compute[187439]: 2025-10-09 09:56:28.708 2 DEBUG oslo_concurrency.processutils [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.370s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
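
The 0.370 s round trip logged above is a plain subprocess: the resource tracker shells out to the ceph CLI for pool capacity. Reproducing it with the same oslo helper (the JSON key names follow recent Ceph releases and are an assumption):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    # e.g. roughly 60 GiB total / 60 GiB avail, matching the pgmap lines
    print(stats['total_bytes'] / 2**30, stats['total_avail_bytes'] / 2**30)
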
Oct  9 09:56:28 compute-0 nova_compute[187439]: 2025-10-09 09:56:28.713 2 DEBUG nova.compute.provider_tree [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:56:28 compute-0 nova_compute[187439]: 2025-10-09 09:56:28.727 2 DEBUG nova.scheduler.client.report [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
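
The inventory record above fixes the host's schedulable capacity; placement derives it as (total - reserved) * allocation_ratio per resource class. Worked out for these numbers:

    # Capacity implied by the logged inventory:
    inventory = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 4,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # MEMORY_MB 7168.0, VCPU 16.0, DISK_GB 52.2
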
Oct  9 09:56:28 compute-0 nova_compute[187439]: 2025-10-09 09:56:28.739 2 DEBUG oslo_concurrency.lockutils [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:28 compute-0 nova_compute[187439]: 2025-10-09 09:56:28.756 2 INFO nova.scheduler.client.report [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Deleted allocations for instance bb0dd1df-5930-471c-a79b-b51d83e9431b#033[00m
Oct  9 09:56:28 compute-0 nova_compute[187439]: 2025-10-09 09:56:28.798 2 DEBUG oslo_concurrency.lockutils [None req-ba0930ac-ce12-4f4c-a873-da29323b4790 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "bb0dd1df-5930-471c-a79b-b51d83e9431b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:28.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:28.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:28.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:28.895Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
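
Each alertmanager failure above has the same root cause: the three np000547830x.shiftstack webhook receivers do not resolve through the DNS server at 192.168.122.80. A quick reproduction from the standard library, assuming /etc/resolv.conf points at the same resolver shown in the error:

    import socket

    for host in ('np0005478302.shiftstack',
                 'np0005478303.shiftstack',
                 'np0005478304.shiftstack'):
        try:
            print(host, '->', socket.gethostbyname(host))
        except socket.gaierror as exc:
            # mirrors the 'no such host' the dispatcher keeps retrying on
            print(host, 'failed:', exc)
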
Oct  9 09:56:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:29.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:29 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v699: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 11 KiB/s wr, 172 op/s
Oct  9 09:56:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:29.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:29 compute-0 nova_compute[187439]: 2025-10-09 09:56:29.822 2 DEBUG nova.compute.manager [req-f36a9dac-1390-4575-8f0c-25e705d7cd7f req-ef751440-673a-431f-8e09-abda2039d0e4 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Received event network-vif-plugged-5ebc58bd-1327-457d-a25b-9c56c1001f06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:56:29 compute-0 nova_compute[187439]: 2025-10-09 09:56:29.823 2 DEBUG oslo_concurrency.lockutils [req-f36a9dac-1390-4575-8f0c-25e705d7cd7f req-ef751440-673a-431f-8e09-abda2039d0e4 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "bb0dd1df-5930-471c-a79b-b51d83e9431b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:29 compute-0 nova_compute[187439]: 2025-10-09 09:56:29.823 2 DEBUG oslo_concurrency.lockutils [req-f36a9dac-1390-4575-8f0c-25e705d7cd7f req-ef751440-673a-431f-8e09-abda2039d0e4 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "bb0dd1df-5930-471c-a79b-b51d83e9431b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:29 compute-0 nova_compute[187439]: 2025-10-09 09:56:29.823 2 DEBUG oslo_concurrency.lockutils [req-f36a9dac-1390-4575-8f0c-25e705d7cd7f req-ef751440-673a-431f-8e09-abda2039d0e4 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "bb0dd1df-5930-471c-a79b-b51d83e9431b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:29 compute-0 nova_compute[187439]: 2025-10-09 09:56:29.823 2 DEBUG nova.compute.manager [req-f36a9dac-1390-4575-8f0c-25e705d7cd7f req-ef751440-673a-431f-8e09-abda2039d0e4 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] No waiting events found dispatching network-vif-plugged-5ebc58bd-1327-457d-a25b-9c56c1001f06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 09:56:29 compute-0 nova_compute[187439]: 2025-10-09 09:56:29.823 2 WARNING nova.compute.manager [req-f36a9dac-1390-4575-8f0c-25e705d7cd7f req-ef751440-673a-431f-8e09-abda2039d0e4 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Received unexpected event network-vif-plugged-5ebc58bd-1327-457d-a25b-9c56c1001f06 for instance with vm_state deleted and task_state None.#033[00m
Oct  9 09:56:29 compute-0 nova_compute[187439]: 2025-10-09 09:56:29.824 2 DEBUG nova.compute.manager [req-f36a9dac-1390-4575-8f0c-25e705d7cd7f req-ef751440-673a-431f-8e09-abda2039d0e4 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Received event network-vif-deleted-5ebc58bd-1327-457d-a25b-9c56c1001f06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:56:29 compute-0 nova_compute[187439]: 2025-10-09 09:56:29.824 2 INFO nova.compute.manager [req-f36a9dac-1390-4575-8f0c-25e705d7cd7f req-ef751440-673a-431f-8e09-abda2039d0e4 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Neutron deleted interface 5ebc58bd-1327-457d-a25b-9c56c1001f06; detaching it from the instance and deleting it from the info cache#033[00m
Oct  9 09:56:29 compute-0 nova_compute[187439]: 2025-10-09 09:56:29.824 2 DEBUG nova.network.neutron [req-f36a9dac-1390-4575-8f0c-25e705d7cd7f req-ef751440-673a-431f-8e09-abda2039d0e4 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Oct  9 09:56:29 compute-0 nova_compute[187439]: 2025-10-09 09:56:29.826 2 DEBUG nova.compute.manager [req-f36a9dac-1390-4575-8f0c-25e705d7cd7f req-ef751440-673a-431f-8e09-abda2039d0e4 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Detach interface failed, port_id=5ebc58bd-1327-457d-a25b-9c56c1001f06, reason: Instance bb0dd1df-5930-471c-a79b-b51d83e9431b could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  9 09:56:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:31.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:31 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v700: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 12 KiB/s wr, 200 op/s
Oct  9 09:56:31 compute-0 nova_compute[187439]: 2025-10-09 09:56:31.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
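
The periodic "[POLLIN] on fd 24" wakeups are the OVS IDL's poll loop noticing readable data on its OVSDB connection. The primitive underneath is the ovs python library's Poller; a self-contained demonstration with a socketpair (the fd here is ours, not the agent's fd 24):

    import select
    import socket

    import ovs.poller

    s1, s2 = socket.socketpair()
    s2.send(b'x')                         # make s1 readable
    p = ovs.poller.Poller()
    p.fd_wait(s1.fileno(), select.POLLIN)
    p.block()                             # returns once the fd is readable,
                                          # the event __log_wakeup reports
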
Oct  9 09:56:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:31.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:32 compute-0 nova_compute[187439]: 2025-10-09 09:56:32.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:56:32] "GET /metrics HTTP/1.1" 200 48530 "" "Prometheus/2.51.0"
Oct  9 09:56:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:56:32] "GET /metrics HTTP/1.1" 200 48530 "" "Prometheus/2.51.0"
Oct  9 09:56:32 compute-0 nova_compute[187439]: 2025-10-09 09:56:32.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:32 compute-0 nova_compute[187439]: 2025-10-09 09:56:32.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:32 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:32 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:32 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:33.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:33 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v701: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct  9 09:56:33 compute-0 podman[194078]: 2025-10-09 09:56:33.638014045 +0000 UTC m=+0.071106448 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
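
The podman health_status record embeds the container's whole healthcheck definition ('test': '/openstack/healthcheck', mounted from /var/lib/openstack/healthchecks/ovn_controller). The current verdict can be read back with podman inspect; the Go-template field path below matches recent podman releases and is an assumption for this exact build:

    import subprocess

    res = subprocess.run(
        ['podman', 'inspect', '--format',
         '{{.State.Health.Status}}', 'ovn_controller'],
        capture_output=True, text=True, check=True)
    print(res.stdout.strip())  # 'healthy', matching health_status above
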
Oct  9 09:56:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:33.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:56:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:56:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:35.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:35 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v702: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct  9 09:56:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:35.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:36 compute-0 nova_compute[187439]: 2025-10-09 09:56:36.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:37.050Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:37.061Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:37.062Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:37.062Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:37.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:37 compute-0 nova_compute[187439]: 2025-10-09 09:56:37.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:37 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v703: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct  9 09:56:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:37.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:38 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:38.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:38.893Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:38.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:38.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:39.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:39 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v704: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct  9 09:56:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:39.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:41.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:41 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v705: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct  9 09:56:41 compute-0 nova_compute[187439]: 2025-10-09 09:56:41.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:41.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:56:42] "GET /metrics HTTP/1.1" 200 48515 "" "Prometheus/2.51.0"
Oct  9 09:56:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:56:42] "GET /metrics HTTP/1.1" 200 48515 "" "Prometheus/2.51.0"
Oct  9 09:56:42 compute-0 nova_compute[187439]: 2025-10-09 09:56:42.382 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760003787.3810437, bb0dd1df-5930-471c-a79b-b51d83e9431b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:56:42 compute-0 nova_compute[187439]: 2025-10-09 09:56:42.383 2 INFO nova.compute.manager [-] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] VM Stopped (Lifecycle Event)#033[00m
Oct  9 09:56:42 compute-0 nova_compute[187439]: 2025-10-09 09:56:42.395 2 DEBUG nova.compute.manager [None req-4d3049a1-5b89-494f-985d-0d5c191d12cb - - - - - -] [instance: bb0dd1df-5930-471c-a79b-b51d83e9431b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:56:42 compute-0 nova_compute[187439]: 2025-10-09 09:56:42.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:43.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:43 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v706: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:56:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:43.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:45.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:45 compute-0 podman[194113]: 2025-10-09 09:56:45.614921347 +0000 UTC m=+0.055145633 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 09:56:45 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v707: 337 pgs: 337 active+clean; 54 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 336 KiB/s wr, 4 op/s
Oct  9 09:56:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:45.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:46 compute-0 nova_compute[187439]: 2025-10-09 09:56:46.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:47.051Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:47.060Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:47.060Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:47.060Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:47.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:47 compute-0 nova_compute[187439]: 2025-10-09 09:56:47.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:47 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v708: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  9 09:56:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:47.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:48.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:48.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:48.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:48.893Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:49.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:56:49
Oct  9 09:56:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:56:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 09:56:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['.mgr', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', '.nfs', 'volumes', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'vms']
Oct  9 09:56:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 09:56:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:56:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:56:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:56:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:56:49 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v709: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  9 09:56:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:56:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:56:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:56:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:56:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:56:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:56:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:56:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:56:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:56:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:49.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:56:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:56:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:56:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:56:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
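The rbd_support module reloads per-pool schedules for both its MirrorSnapshotScheduleHandler and TrashPurgeScheduleHandler; the empty start_after= means each pool's schedule listing is scanned from the beginning. A hedged sketch listing what those handlers would load, using the `rbd ... schedule ls` CLI (real subcommands; pool names copied from the log, output shape version-dependent):

# rbd_schedules.py - list mirror-snapshot and trash-purge schedules per pool.
import json
import subprocess

POOLS = ["vms", "volumes", "backups", "images"]  # pool names from the log

def ls(kind, pool):
    # kind is "mirror snapshot" or "trash purge"; both support "schedule ls".
    cmd = ["rbd", *kind.split(), "schedule", "ls", "--pool", pool, "--format", "json"]
    res = subprocess.run(cmd, capture_output=True, text=True)
    return json.loads(res.stdout) if res.returncode == 0 and res.stdout.strip() else None

for pool in POOLS:
    print(pool, ls("mirror snapshot", pool), ls("trash purge", pool))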
Oct  9 09:56:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:56:50 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3169311562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:56:50 compute-0 nova_compute[187439]: 2025-10-09 09:56:50.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:50 compute-0 nova_compute[187439]: 2025-10-09 09:56:50.265 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:50 compute-0 nova_compute[187439]: 2025-10-09 09:56:50.265 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:50 compute-0 nova_compute[187439]: 2025-10-09 09:56:50.265 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:50 compute-0 nova_compute[187439]: 2025-10-09 09:56:50.265 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:56:50 compute-0 nova_compute[187439]: 2025-10-09 09:56:50.265 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:56:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:56:50 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1075486111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:56:50 compute-0 nova_compute[187439]: 2025-10-09 09:56:50.634 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.369s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
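Nova's resource tracker shells out to the exact command logged above to size its RBD-backed disk capacity. A minimal sketch of running it and reading the result; the command line is copied from the log, and the JSON field names follow current `ceph df` output, which may shift between releases:

# ceph_df.py - run the same command nova logs above and read the stats.
import json
import subprocess

cmd = ["ceph", "df", "--format=json", "--id", "openstack",
       "--conf", "/etc/ceph/ceph.conf"]
df = json.loads(subprocess.run(cmd, check=True,
                               capture_output=True, text=True).stdout)

total = df["stats"]["total_bytes"]         # raw cluster capacity
avail = df["stats"]["total_avail_bytes"]   # raw free capacity
print(f"cluster: {avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")
for pool in df["pools"]:
    print(pool["name"], pool["stats"]["bytes_used"])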
Oct  9 09:56:50 compute-0 nova_compute[187439]: 2025-10-09 09:56:50.870 2 WARNING nova.virt.libvirt.driver [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:56:50 compute-0 nova_compute[187439]: 2025-10-09 09:56:50.872 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4754MB free_disk=59.967525482177734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:56:50 compute-0 nova_compute[187439]: 2025-10-09 09:56:50.872 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:50 compute-0 nova_compute[187439]: 2025-10-09 09:56:50.872 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:50 compute-0 nova_compute[187439]: 2025-10-09 09:56:50.916 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:56:50 compute-0 nova_compute[187439]: 2025-10-09 09:56:50.916 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:56:50 compute-0 nova_compute[187439]: 2025-10-09 09:56:50.927 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:56:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:56:51 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2705111090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:56:51 compute-0 nova_compute[187439]: 2025-10-09 09:56:51.287 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:56:51 compute-0 nova_compute[187439]: 2025-10-09 09:56:51.292 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:56:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:56:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:51.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:56:51 compute-0 nova_compute[187439]: 2025-10-09 09:56:51.303 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:56:51 compute-0 nova_compute[187439]: 2025-10-09 09:56:51.316 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:56:51 compute-0 nova_compute[187439]: 2025-10-09 09:56:51.316 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
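The placement inventory at 09:56:51.303 encodes how schedulable capacity is derived for this provider: effective units = (total - reserved) * allocation_ratio per resource class. A worked example with the exact numbers from that log line:

# capacity.py - schedulable capacity from the inventory logged above.
inventory = {
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 4,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    effective = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, effective)
# MEMORY_MB 7168.0, VCPU 16.0, DISK_GB 52.2 - the capacity placement will
# allow allocations against, per its documented capacity formula.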
Oct  9 09:56:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:51 compute-0 ceph-mgr[4772]: [devicehealth INFO root] Check health
Oct  9 09:56:51 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v710: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct  9 09:56:51 compute-0 nova_compute[187439]: 2025-10-09 09:56:51.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:51.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:51 compute-0 podman[194205]: 2025-10-09 09:56:51.782807936 +0000 UTC m=+0.046445638 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
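The health_status=healthy event above comes from podman's healthcheck timer running the configured test (the /openstack/healthcheck script mounted into the container). A sketch of checking the same state on demand; `podman inspect` and `podman healthcheck run` are real subcommands, and the container name is taken from the log:

# healthcheck.py - query the ovn_metadata_agent health state logged above.
import json
import subprocess

name = "ovn_metadata_agent"  # container name from the log
insp = json.loads(subprocess.run(
    ["podman", "inspect", name],
    check=True, capture_output=True, text=True,
).stdout)[0]
print(insp["State"]["Health"]["Status"])  # e.g. "healthy"
# Force an immediate check instead of waiting for the timer:
subprocess.run(["podman", "healthcheck", "run", name], check=False)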
Oct  9 09:56:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:51 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:51 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:51 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
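The ganesha instance keeps opening a 90-second grace window, finds no clients to reclaim (clid count(0)), and rados_cluster_grace_enforcing returns a negative status, so with the clustered rados_cluster recovery backend the grace state lives in a shared RADOS object. A sketch of dumping that shared grace database with the ganesha-rados-grace tool; the pool and namespace here are assumptions for a cephadm-deployed NFS service named "cephfs" (the .nfs pool does appear in the balancer's pool list above):

# grace_dump.py - inspect the shared NFS-Ganesha grace database.
# --pool/--ns values are assumptions for this deployment, not from the log.
import subprocess

subprocess.run(
    ["ganesha-rados-grace", "--pool", ".nfs", "--ns", "cephfs", "dump"],
    check=False,
)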
Oct  9 09:56:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:56:52] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Oct  9 09:56:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:56:52] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
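The two lines above are the same scrape seen twice: once from the mgr container's stdout and once from the mgr's own cherrypy access log, both showing Prometheus 2.51.0 pulling 48539 bytes of /metrics. A sketch pulling the endpoint the same way; the host is taken from the log but the port is an assumption (the mgr prometheus module defaults to 9283):

# scrape.py - pull the ceph-mgr exporter the way Prometheus does.
from urllib.request import urlopen

body = urlopen("http://192.168.122.100:9283/metrics", timeout=5).read().decode()
# "# TYPE <name> <type>" lines enumerate the metric families exposed.
families = {line.split()[2] for line in body.splitlines()
            if line.startswith("# TYPE")}
print(len(body), "bytes,", len(families), "metric families")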
Oct  9 09:56:52 compute-0 nova_compute[187439]: 2025-10-09 09:56:52.316 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:52 compute-0 nova_compute[187439]: 2025-10-09 09:56:52.328 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:52 compute-0 nova_compute[187439]: 2025-10-09 09:56:52.328 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:52 compute-0 nova_compute[187439]: 2025-10-09 09:56:52.328 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:52 compute-0 nova_compute[187439]: 2025-10-09 09:56:52.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:53 compute-0 nova_compute[187439]: 2025-10-09 09:56:53.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:53 compute-0 nova_compute[187439]: 2025-10-09 09:56:53.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:53 compute-0 nova_compute[187439]: 2025-10-09 09:56:53.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:53 compute-0 nova_compute[187439]: 2025-10-09 09:56:53.247 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 09:56:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:53.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:53 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v711: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct  9 09:56:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:53.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:54 compute-0 nova_compute[187439]: 2025-10-09 09:56:54.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:54 compute-0 nova_compute[187439]: 2025-10-09 09:56:54.247 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 09:56:54 compute-0 nova_compute[187439]: 2025-10-09 09:56:54.247 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 09:56:54 compute-0 nova_compute[187439]: 2025-10-09 09:56:54.259 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 09:56:54 compute-0 nova_compute[187439]: 2025-10-09 09:56:54.259 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:54 compute-0 podman[194227]: 2025-10-09 09:56:54.606802143 +0000 UTC m=+0.043768199 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd)
Oct  9 09:56:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:56:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:55.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:56:55 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v712: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct  9 09:56:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:55.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:56 compute-0 nova_compute[187439]: 2025-10-09 09:56:56.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:56 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:56 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:56 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:56:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:57.053Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:57.062Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:57.062Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:57.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:57.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:57 compute-0 nova_compute[187439]: 2025-10-09 09:56:57.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:57 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v713: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 99 op/s
Oct  9 09:56:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:57.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:58.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:58.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:58.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:56:58.895Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
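Each pg_autoscaler line computes pg target = used_fraction * bias * N and then quantizes toward a power of two, with small pools clamped to a floor (hence 32, or 16 for the bias-4 metadata pool). From the logged numbers N = 300, consistent with mon_target_pg_per_osd (default 100) times 3 OSDs: 7.18575e-06 * 300 = 0.00215572 for .mgr, and 5.08726e-07 * 4.0 * 300 = 0.00061047 for cephfs.cephfs.meta. A worked example reproducing the raw targets; N is inferred from the log rather than read from config:

# pg_target.py - reproduce the pg_autoscaler targets logged above.
N = 300  # inferred: mon_target_pg_per_osd (100) * 3 OSDs
pools = {             # (used_fraction, bias) copied from the log
    ".mgr":               (7.185749983720779e-06, 1.0),
    "vms":                (0.00034841348814872695, 1.0),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
}
for name, (used, bias) in pools.items():
    print(name, used * bias * N)   # matches the "pg target" values above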
Oct  9 09:56:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:59.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:59 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v714: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct  9 09:56:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:56:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:59.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:01.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:01 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v715: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct  9 09:57:01 compute-0 nova_compute[187439]: 2025-10-09 09:57:01.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:01.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:01 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:01 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:01 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:57:02] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Oct  9 09:57:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:57:02] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Oct  9 09:57:02 compute-0 nova_compute[187439]: 2025-10-09 09:57:02.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:03.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:03 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v716: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  9 09:57:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:57:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:03.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:57:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:57:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:57:04 compute-0 podman[194254]: 2025-10-09 09:57:04.626775002 +0000 UTC m=+0.068566208 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  9 09:57:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:05.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:05 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v717: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct  9 09:57:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:05.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:06 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:57:06.085 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:57:06 compute-0 nova_compute[187439]: 2025-10-09 09:57:06.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:06 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:57:06.087 92053 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
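The matched SbGlobalUpdateEvent above is an ovsdbapp row event on the SB_Global table (nb_cfg bumped 4 to 5); neutron then delays its own chassis-table write by a few seconds (8 here) so all nodes do not respond at once. A sketch of that event pattern, shaped after ovsdbapp's RowEvent base class; treat the constructor details as version-dependent:

# sb_global_event.py - the ovsdbapp pattern behind the matched event above.
from ovsdbapp.backend.ovs_idl import event as row_event

class SbGlobalUpdateEvent(row_event.RowEvent):
    """React to SB_Global updates, e.g. the nb_cfg bump seen in the log."""
    def __init__(self):
        super().__init__((self.ROW_UPDATE,), "SB_Global", None)

    def run(self, event, row, old):
        # The real agent reschedules its chassis heartbeat with a random
        # delay here instead of writing immediately.
        print("nb_cfg is now", row.nb_cfg)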
Oct  9 09:57:06 compute-0 ovn_controller[83056]: 2025-10-09T09:57:06Z|00038|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct  9 09:57:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:57:06 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:57:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:57:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:57:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v718: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.4 MiB/s wr, 73 op/s
Oct  9 09:57:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:57:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:57:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:57:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:57:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:57:06 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:57:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:57:06 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:57:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:57:06 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
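The burst of mon commands above is cephadm's mgr module refreshing what it distributes to managed hosts: a minimal ceph.conf plus the client.admin and client.bootstrap-osd keyrings. A sketch issuing the same two commands from a shell; both are real `ceph` subcommands:

# minimal_conf.py - the same mon commands the mgr dispatches above.
import subprocess

conf = subprocess.run(["ceph", "config", "generate-minimal-conf"],
                      check=True, capture_output=True, text=True).stdout
key = subprocess.run(["ceph", "auth", "get", "client.bootstrap-osd"],
                     check=True, capture_output=True, text=True).stdout
print(conf)  # mon_host + fsid, enough for a client to reach the cluster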
Oct  9 09:57:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:06 compute-0 nova_compute[187439]: 2025-10-09 09:57:06.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:06 compute-0 podman[194440]: 2025-10-09 09:57:06.77720279 +0000 UTC m=+0.038193618 container create 1c06343a90f864e119e3a0f09cc3c6cf1fb7a1f45b85630529298ad2e99025e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  9 09:57:06 compute-0 systemd[1]: Started libpod-conmon-1c06343a90f864e119e3a0f09cc3c6cf1fb7a1f45b85630529298ad2e99025e1.scope.
Oct  9 09:57:06 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:57:06 compute-0 podman[194440]: 2025-10-09 09:57:06.846495928 +0000 UTC m=+0.107486776 container init 1c06343a90f864e119e3a0f09cc3c6cf1fb7a1f45b85630529298ad2e99025e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  9 09:57:06 compute-0 podman[194440]: 2025-10-09 09:57:06.851525822 +0000 UTC m=+0.112516650 container start 1c06343a90f864e119e3a0f09cc3c6cf1fb7a1f45b85630529298ad2e99025e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_elbakyan, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  9 09:57:06 compute-0 podman[194440]: 2025-10-09 09:57:06.852885736 +0000 UTC m=+0.113876564 container attach 1c06343a90f864e119e3a0f09cc3c6cf1fb7a1f45b85630529298ad2e99025e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  9 09:57:06 compute-0 modest_elbakyan[194453]: 167 167
Oct  9 09:57:06 compute-0 systemd[1]: libpod-1c06343a90f864e119e3a0f09cc3c6cf1fb7a1f45b85630529298ad2e99025e1.scope: Deactivated successfully.
Oct  9 09:57:06 compute-0 podman[194440]: 2025-10-09 09:57:06.761919842 +0000 UTC m=+0.022910691 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:57:06 compute-0 conmon[194453]: conmon 1c06343a90f864e119e3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c06343a90f864e119e3a0f09cc3c6cf1fb7a1f45b85630529298ad2e99025e1.scope/container/memory.events
Oct  9 09:57:06 compute-0 podman[194440]: 2025-10-09 09:57:06.857409004 +0000 UTC m=+0.118399833 container died 1c06343a90f864e119e3a0f09cc3c6cf1fb7a1f45b85630529298ad2e99025e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_elbakyan, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct  9 09:57:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-65492cecc6e04ba7e2cb27c6e63e0e77621faf34dfeac8f9ffbded93b0eb8c76-merged.mount: Deactivated successfully.
Oct  9 09:57:06 compute-0 podman[194440]: 2025-10-09 09:57:06.880877487 +0000 UTC m=+0.141868315 container remove 1c06343a90f864e119e3a0f09cc3c6cf1fb7a1f45b85630529298ad2e99025e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_elbakyan, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:57:06 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:57:06 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:57:06 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:57:06 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:57:06 compute-0 systemd[1]: libpod-conmon-1c06343a90f864e119e3a0f09cc3c6cf1fb7a1f45b85630529298ad2e99025e1.scope: Deactivated successfully.
Oct  9 09:57:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:06 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:06 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:06 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:06 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:07 compute-0 podman[194475]: 2025-10-09 09:57:07.034750225 +0000 UTC m=+0.042574068 container create 35a4f2c932999e51d6f065b33b02e5ee4093a45272aa9dfb95555afe0023e385 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ramanujan, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:57:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:07.054Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:07.062Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:07.062Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:07.062Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:07 compute-0 systemd[1]: Started libpod-conmon-35a4f2c932999e51d6f065b33b02e5ee4093a45272aa9dfb95555afe0023e385.scope.
Oct  9 09:57:07 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b06c3c8aa92e8b6ec4cef06b355c4a1d9f938c7be31ea86ccbd211db6a68da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b06c3c8aa92e8b6ec4cef06b355c4a1d9f938c7be31ea86ccbd211db6a68da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b06c3c8aa92e8b6ec4cef06b355c4a1d9f938c7be31ea86ccbd211db6a68da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b06c3c8aa92e8b6ec4cef06b355c4a1d9f938c7be31ea86ccbd211db6a68da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b06c3c8aa92e8b6ec4cef06b355c4a1d9f938c7be31ea86ccbd211db6a68da/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:57:07 compute-0 podman[194475]: 2025-10-09 09:57:07.102247796 +0000 UTC m=+0.110071649 container init 35a4f2c932999e51d6f065b33b02e5ee4093a45272aa9dfb95555afe0023e385 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  9 09:57:07 compute-0 podman[194475]: 2025-10-09 09:57:07.107074968 +0000 UTC m=+0.114898812 container start 35a4f2c932999e51d6f065b33b02e5ee4093a45272aa9dfb95555afe0023e385 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ramanujan, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:57:07 compute-0 podman[194475]: 2025-10-09 09:57:07.109242765 +0000 UTC m=+0.117066628 container attach 35a4f2c932999e51d6f065b33b02e5ee4093a45272aa9dfb95555afe0023e385 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ramanujan, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:57:07 compute-0 podman[194475]: 2025-10-09 09:57:07.021444185 +0000 UTC m=+0.029268028 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:57:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:07.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
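[editor's note] The three radosgw lines above are one probe: beast logs the request start, the completion with its op status, and an access-log line with a fixed layout (frontend request pointer, client IP, user, bracketed timestamp, request line, HTTP status, bytes sent, latency). A minimal parsing sketch for that access-log line follows; the regex and group names are illustrative assumptions, not anything radosgw defines. The recurring anonymous "HEAD / HTTP/1.0" probes from 192.168.122.100 and .102 every two seconds look like load-balancer health checks, though that is an inference from the pattern, not something the log states.

    import re

    # Field layout as observed in the beast access-log lines in this log;
    # the pattern and group names are assumptions for illustration only.
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    sample = ('beast: 0x7f7346e135d0: 192.168.122.102 - anonymous '
              '[09/Oct/2025:09:57:07.322 +0000] "HEAD / HTTP/1.0" 200 0 '
              '- - - latency=0.000000000s')

    m = BEAST_RE.search(sample)
    if m:
        # prints: 192.168.122.102 HEAD / HTTP/1.0 200 0.000000000
        print(m['client'], m['request'], m['status'], m['latency'])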
Oct  9 09:57:07 compute-0 musing_ramanujan[194489]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:57:07 compute-0 musing_ramanujan[194489]: --> All data devices are unavailable
Oct  9 09:57:07 compute-0 systemd[1]: libpod-35a4f2c932999e51d6f065b33b02e5ee4093a45272aa9dfb95555afe0023e385.scope: Deactivated successfully.
Oct  9 09:57:07 compute-0 podman[194475]: 2025-10-09 09:57:07.399559719 +0000 UTC m=+0.407383562 container died 35a4f2c932999e51d6f065b33b02e5ee4093a45272aa9dfb95555afe0023e385 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ramanujan, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 09:57:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-82b06c3c8aa92e8b6ec4cef06b355c4a1d9f938c7be31ea86ccbd211db6a68da-merged.mount: Deactivated successfully.
Oct  9 09:57:07 compute-0 podman[194475]: 2025-10-09 09:57:07.424703169 +0000 UTC m=+0.432527013 container remove 35a4f2c932999e51d6f065b33b02e5ee4093a45272aa9dfb95555afe0023e385 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ramanujan, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  9 09:57:07 compute-0 nova_compute[187439]: 2025-10-09 09:57:07.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:57:07 compute-0 systemd[1]: libpod-conmon-35a4f2c932999e51d6f065b33b02e5ee4093a45272aa9dfb95555afe0023e385.scope: Deactivated successfully.
Oct  9 09:57:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:07.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:07 compute-0 podman[194594]: 2025-10-09 09:57:07.952361337 +0000 UTC m=+0.036363426 container create 42c5a9e6d3399be7283340993bbc3c7e6fb8cb93310ddcd74c439d1a515be5ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_hertz, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:57:07 compute-0 systemd[1]: Started libpod-conmon-42c5a9e6d3399be7283340993bbc3c7e6fb8cb93310ddcd74c439d1a515be5ee.scope.
Oct  9 09:57:08 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:57:08 compute-0 podman[194594]: 2025-10-09 09:57:08.016004722 +0000 UTC m=+0.100006811 container init 42c5a9e6d3399be7283340993bbc3c7e6fb8cb93310ddcd74c439d1a515be5ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_hertz, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  9 09:57:08 compute-0 podman[194594]: 2025-10-09 09:57:08.021151847 +0000 UTC m=+0.105153926 container start 42c5a9e6d3399be7283340993bbc3c7e6fb8cb93310ddcd74c439d1a515be5ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_hertz, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:57:08 compute-0 podman[194594]: 2025-10-09 09:57:08.022430007 +0000 UTC m=+0.106432086 container attach 42c5a9e6d3399be7283340993bbc3c7e6fb8cb93310ddcd74c439d1a515be5ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_hertz, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  9 09:57:08 compute-0 zealous_hertz[194607]: 167 167
Oct  9 09:57:08 compute-0 systemd[1]: libpod-42c5a9e6d3399be7283340993bbc3c7e6fb8cb93310ddcd74c439d1a515be5ee.scope: Deactivated successfully.
Oct  9 09:57:08 compute-0 podman[194594]: 2025-10-09 09:57:08.025703129 +0000 UTC m=+0.109705218 container died 42c5a9e6d3399be7283340993bbc3c7e6fb8cb93310ddcd74c439d1a515be5ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:57:08 compute-0 podman[194594]: 2025-10-09 09:57:07.939497291 +0000 UTC m=+0.023499380 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:57:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-01e237b30724d9b1859450c5dd5306faa192132f093968923e6b2c77bd7b5cf5-merged.mount: Deactivated successfully.
Oct  9 09:57:08 compute-0 podman[194594]: 2025-10-09 09:57:08.048704259 +0000 UTC m=+0.132706338 container remove 42c5a9e6d3399be7283340993bbc3c7e6fb8cb93310ddcd74c439d1a515be5ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:57:08 compute-0 systemd[1]: libpod-conmon-42c5a9e6d3399be7283340993bbc3c7e6fb8cb93310ddcd74c439d1a515be5ee.scope: Deactivated successfully.
Oct  9 09:57:08 compute-0 podman[194630]: 2025-10-09 09:57:08.203466584 +0000 UTC m=+0.041866754 container create afa48e22b366060aac2895597f03d36c80e069083fe97657549cb8baaebfb8ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_chaum, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:57:08 compute-0 systemd[1]: Started libpod-conmon-afa48e22b366060aac2895597f03d36c80e069083fe97657549cb8baaebfb8ef.scope.
Oct  9 09:57:08 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:57:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eed2e721da92c21ad58405c53680e2ce3068202c5a0f90d53f4a2198390eb812/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:57:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eed2e721da92c21ad58405c53680e2ce3068202c5a0f90d53f4a2198390eb812/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:57:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eed2e721da92c21ad58405c53680e2ce3068202c5a0f90d53f4a2198390eb812/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:57:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eed2e721da92c21ad58405c53680e2ce3068202c5a0f90d53f4a2198390eb812/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:57:08 compute-0 podman[194630]: 2025-10-09 09:57:08.271055027 +0000 UTC m=+0.109455207 container init afa48e22b366060aac2895597f03d36c80e069083fe97657549cb8baaebfb8ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_chaum, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 09:57:08 compute-0 podman[194630]: 2025-10-09 09:57:08.277572516 +0000 UTC m=+0.115972677 container start afa48e22b366060aac2895597f03d36c80e069083fe97657549cb8baaebfb8ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_chaum, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct  9 09:57:08 compute-0 podman[194630]: 2025-10-09 09:57:08.27897926 +0000 UTC m=+0.117379440 container attach afa48e22b366060aac2895597f03d36c80e069083fe97657549cb8baaebfb8ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_chaum, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 09:57:08 compute-0 podman[194630]: 2025-10-09 09:57:08.188059874 +0000 UTC m=+0.026460034 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:57:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v719: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.4 MiB/s wr, 73 op/s
Oct  9 09:57:08 compute-0 lucid_chaum[194643]: {
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:    "1": [
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:        {
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:            "devices": [
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:                "/dev/loop3"
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:            ],
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:            "lv_name": "ceph_lv0",
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:            "lv_size": "21470642176",
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:            "name": "ceph_lv0",
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:            "tags": {
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:                "ceph.cluster_name": "ceph",
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:                "ceph.crush_device_class": "",
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:                "ceph.encrypted": "0",
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:                "ceph.osd_id": "1",
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:                "ceph.type": "block",
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:                "ceph.vdo": "0",
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:                "ceph.with_tpm": "0"
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:            },
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:            "type": "block",
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:            "vg_name": "ceph_vg0"
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:        }
Oct  9 09:57:08 compute-0 lucid_chaum[194643]:    ]
Oct  9 09:57:08 compute-0 lucid_chaum[194643]: }
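[editor's note] The JSON the lucid_chaum container printed above has the shape of `ceph-volume lvm list --format json` output: a map from OSD id to the logical volumes backing it, with the authoritative details carried in the LV tags (cluster fsid, osd_fsid, osd_id, type). That cephadm invoked exactly that command here is an assumption; only the output is in the log. A minimal sketch of reading the device mapping back out of a trimmed copy of that payload:

    import json

    # Trimmed copy of the report above; osd id -> list of backing LVs.
    payload = '''
    {
      "1": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "tags": {
            "ceph.osd_id": "1",
            "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
            "ceph.type": "block"
          }
        }
      ]
    }
    '''

    report = json.loads(payload)
    for osd_id, lvs in report.items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {', '.join(lv['devices'])}")

This is the per-host inventory the mgr then persists a couple of seconds later via the config-key set of mgr/cephadm/host.compute-0.devices.0 visible below.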
Oct  9 09:57:08 compute-0 systemd[1]: libpod-afa48e22b366060aac2895597f03d36c80e069083fe97657549cb8baaebfb8ef.scope: Deactivated successfully.
Oct  9 09:57:08 compute-0 conmon[194643]: conmon afa48e22b366060aac28 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-afa48e22b366060aac2895597f03d36c80e069083fe97657549cb8baaebfb8ef.scope/container/memory.events
Oct  9 09:57:08 compute-0 podman[194630]: 2025-10-09 09:57:08.546944878 +0000 UTC m=+0.385345038 container died afa48e22b366060aac2895597f03d36c80e069083fe97657549cb8baaebfb8ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  9 09:57:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-eed2e721da92c21ad58405c53680e2ce3068202c5a0f90d53f4a2198390eb812-merged.mount: Deactivated successfully.
Oct  9 09:57:08 compute-0 podman[194630]: 2025-10-09 09:57:08.572161976 +0000 UTC m=+0.410562136 container remove afa48e22b366060aac2895597f03d36c80e069083fe97657549cb8baaebfb8ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_chaum, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  9 09:57:08 compute-0 systemd[1]: libpod-conmon-afa48e22b366060aac2895597f03d36c80e069083fe97657549cb8baaebfb8ef.scope: Deactivated successfully.
Oct  9 09:57:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:08.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:08.896Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:08.896Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:08.896Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
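[editor's note] All of the alertmanager failures above reduce to one cause: the ceph-dashboard webhook receivers post to np0005478302/3/4.shiftstack:8443, and the resolver at 192.168.122.80 returns no records for those names, so every attempt fails in the DNS lookup before a TCP connection is even tried. A minimal sketch of checking the same lookups from the host; note that socket.getaddrinfo uses the system resolver chain, and querying 192.168.122.80 directly would need a DNS library such as dnspython (an assumption beyond what the log shows):

    import socket

    WEBHOOK_HOSTS = (
        'np0005478302.shiftstack',
        'np0005478303.shiftstack',
        'np0005478304.shiftstack',
    )

    for host in WEBHOOK_HOSTS:
        try:
            addrs = sorted({ai[4][0] for ai in socket.getaddrinfo(host, 8443)})
            print(f'{host} -> {addrs}')
        except socket.gaierror as exc:
            # Mirrors the dispatcher's 'no such host' failure mode.
            print(f'{host} -> lookup failed: {exc}')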
Oct  9 09:57:09 compute-0 podman[194744]: 2025-10-09 09:57:09.12925689 +0000 UTC m=+0.036865103 container create 41e5730e5149487b674d5e161158f6878b95aa73077a7ddfb617af34bafc591d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_agnesi, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  9 09:57:09 compute-0 systemd[1]: Started libpod-conmon-41e5730e5149487b674d5e161158f6878b95aa73077a7ddfb617af34bafc591d.scope.
Oct  9 09:57:09 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:57:09 compute-0 podman[194744]: 2025-10-09 09:57:09.197972468 +0000 UTC m=+0.105580681 container init 41e5730e5149487b674d5e161158f6878b95aa73077a7ddfb617af34bafc591d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:57:09 compute-0 podman[194744]: 2025-10-09 09:57:09.204438149 +0000 UTC m=+0.112046362 container start 41e5730e5149487b674d5e161158f6878b95aa73077a7ddfb617af34bafc591d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:57:09 compute-0 podman[194744]: 2025-10-09 09:57:09.206031173 +0000 UTC m=+0.113639386 container attach 41e5730e5149487b674d5e161158f6878b95aa73077a7ddfb617af34bafc591d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:57:09 compute-0 systemd[1]: libpod-41e5730e5149487b674d5e161158f6878b95aa73077a7ddfb617af34bafc591d.scope: Deactivated successfully.
Oct  9 09:57:09 compute-0 vibrant_agnesi[194757]: 167 167
Oct  9 09:57:09 compute-0 conmon[194757]: conmon 41e5730e5149487b674d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-41e5730e5149487b674d5e161158f6878b95aa73077a7ddfb617af34bafc591d.scope/container/memory.events
Oct  9 09:57:09 compute-0 podman[194744]: 2025-10-09 09:57:09.209669212 +0000 UTC m=+0.117277425 container died 41e5730e5149487b674d5e161158f6878b95aa73077a7ddfb617af34bafc591d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_agnesi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:57:09 compute-0 podman[194744]: 2025-10-09 09:57:09.115333565 +0000 UTC m=+0.022941779 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:57:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-b95f75f27db54e67997e0d5ea400b6b96c880a32aabcbaa7cc5ed664f3169320-merged.mount: Deactivated successfully.
Oct  9 09:57:09 compute-0 podman[194744]: 2025-10-09 09:57:09.240913827 +0000 UTC m=+0.148522040 container remove 41e5730e5149487b674d5e161158f6878b95aa73077a7ddfb617af34bafc591d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_agnesi, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  9 09:57:09 compute-0 systemd[1]: libpod-conmon-41e5730e5149487b674d5e161158f6878b95aa73077a7ddfb617af34bafc591d.scope: Deactivated successfully.
Oct  9 09:57:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:09.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:09 compute-0 podman[194779]: 2025-10-09 09:57:09.395912338 +0000 UTC m=+0.039047018 container create 4f4c8f6425aa22804acc771783ef47458cf5ed294ca22b4de062a48546c9b782 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct  9 09:57:09 compute-0 systemd[1]: Started libpod-conmon-4f4c8f6425aa22804acc771783ef47458cf5ed294ca22b4de062a48546c9b782.scope.
Oct  9 09:57:09 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f83e328c16dd3e719aa873c714f76f6262d7a08ce7a5930ff136c12e83348ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f83e328c16dd3e719aa873c714f76f6262d7a08ce7a5930ff136c12e83348ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f83e328c16dd3e719aa873c714f76f6262d7a08ce7a5930ff136c12e83348ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f83e328c16dd3e719aa873c714f76f6262d7a08ce7a5930ff136c12e83348ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:57:09 compute-0 podman[194779]: 2025-10-09 09:57:09.465169176 +0000 UTC m=+0.108303876 container init 4f4c8f6425aa22804acc771783ef47458cf5ed294ca22b4de062a48546c9b782 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_visvesvaraya, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  9 09:57:09 compute-0 podman[194779]: 2025-10-09 09:57:09.471056717 +0000 UTC m=+0.114191398 container start 4f4c8f6425aa22804acc771783ef47458cf5ed294ca22b4de062a48546c9b782 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 09:57:09 compute-0 podman[194779]: 2025-10-09 09:57:09.472443183 +0000 UTC m=+0.115577862 container attach 4f4c8f6425aa22804acc771783ef47458cf5ed294ca22b4de062a48546c9b782 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct  9 09:57:09 compute-0 podman[194779]: 2025-10-09 09:57:09.382365594 +0000 UTC m=+0.025500294 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:57:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:09.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:10 compute-0 elegant_visvesvaraya[194792]: {}
Oct  9 09:57:10 compute-0 lvm[194870]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:57:10 compute-0 lvm[194870]: VG ceph_vg0 finished
Oct  9 09:57:10 compute-0 systemd[1]: libpod-4f4c8f6425aa22804acc771783ef47458cf5ed294ca22b4de062a48546c9b782.scope: Deactivated successfully.
Oct  9 09:57:10 compute-0 systemd[1]: libpod-4f4c8f6425aa22804acc771783ef47458cf5ed294ca22b4de062a48546c9b782.scope: Consumed 1.102s CPU time.
Oct  9 09:57:10 compute-0 podman[194779]: 2025-10-09 09:57:10.096075777 +0000 UTC m=+0.739210457 container died 4f4c8f6425aa22804acc771783ef47458cf5ed294ca22b4de062a48546c9b782 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_visvesvaraya, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  9 09:57:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:57:10.109 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:57:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:57:10.110 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:57:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:57:10.110 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:57:10 compute-0 lvm[194872]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:57:10 compute-0 lvm[194872]: VG ceph_vg0 finished
Oct  9 09:57:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f83e328c16dd3e719aa873c714f76f6262d7a08ce7a5930ff136c12e83348ba-merged.mount: Deactivated successfully.
Oct  9 09:57:10 compute-0 podman[194779]: 2025-10-09 09:57:10.123637146 +0000 UTC m=+0.766771826 container remove 4f4c8f6425aa22804acc771783ef47458cf5ed294ca22b4de062a48546c9b782 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_visvesvaraya, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:57:10 compute-0 systemd[1]: libpod-conmon-4f4c8f6425aa22804acc771783ef47458cf5ed294ca22b4de062a48546c9b782.scope: Deactivated successfully.
Oct  9 09:57:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:57:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:57:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:57:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:57:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v720: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.4 MiB/s wr, 73 op/s
Oct  9 09:57:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:11 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:57:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:57:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:11.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:11 compute-0 nova_compute[187439]: 2025-10-09 09:57:11.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:57:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:11.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:57:12] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Oct  9 09:57:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:57:12] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Oct  9 09:57:12 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v721: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 18 KiB/s wr, 2 op/s
Oct  9 09:57:12 compute-0 nova_compute[187439]: 2025-10-09 09:57:12.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:57:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:13.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:57:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:13.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:57:14 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:57:14.089 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ef217152-08e8-40c8-a663-3565c5b77d4a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:57:14 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v722: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 18 KiB/s wr, 2 op/s
Oct  9 09:57:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:15.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:15.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:16 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v723: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s rd, 6.4 KiB/s wr, 2 op/s
Oct  9 09:57:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
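The recurring _set_new_cache_sizes line is the mon's memory autotuner republishing its targets every few seconds; the three allocations (incremental osdmap cache, full osdmap cache, rocksdb KV cache, per the inc/full/kv naming) approximately partition the overall cache_size target, which a quick check on the figures above bears out:

    # Sanity-check the mon cache split from the _set_new_cache_sizes line.
    cache_size = 1_020_054_731
    inc_alloc = full_alloc = 348_127_232
    kv_alloc = 318_767_104

    total = inc_alloc + full_alloc + kv_alloc
    print(total)                         # 1015021568
    print(f"{total / cache_size:.3f}")   # ~0.995 of the target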
Oct  9 09:57:16 compute-0 podman[194939]: 2025-10-09 09:57:16.621983422 +0000 UTC m=+0.056322861 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
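podman emits one container health_status event per healthcheck run; the long parenthesised blob is the container's full label and config set, with the interesting bits (name, health_status, health_failing_streak) embedded mid-list. A sketch to reduce these events to one status row each (field order taken from the events in this capture):

    import re
    import sys

    # Pull container name + health fields out of podman health_status events.
    PAT = re.compile(
        r"container health_status .*?name=(?P<name>[\w-]+), "
        r"health_status=(?P<status>\w+), health_failing_streak=(?P<streak>\d+)"
    )

    for line in open(sys.argv[1], errors="replace"):
        if "container health_status" in line and (m := PAT.search(line)):
            print(f"{line[:15]}  {m['name']:20s} {m['status']:9s} streak={m['streak']}")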
Oct  9 09:57:16 compute-0 nova_compute[187439]: 2025-10-09 09:57:16.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:57:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:17.055Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:17.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:17.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:17.064Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
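The alertmanager bursts above all reduce to one fault: the ceph-dashboard receiver fans out to webhooks on np0005478302/303/304.shiftstack:8443, but the resolver at 192.168.122.80 returns "no such host" for all three names, so every notification attempt fails, retries, and is eventually cancelled. Given that the rest of the deployment uses the same np00054783xx naming, missing DNS records (or /etc/hosts entries) for the .shiftstack zone look more likely than a network fault, though that is an inference. A quick way to reproduce the lookups from this host:

    import socket

    # Re-run the lookups that alertmanager reports as failing above.
    HOSTS = [f"np000547830{i}.shiftstack" for i in (2, 3, 4)]

    for host in HOSTS:
        try:
            addrs = sorted({ai[4][0] for ai in socket.getaddrinfo(host, 8443)})
            print(f"{host}: {addrs}")
        except socket.gaierror as exc:
            print(f"{host}: lookup failed ({exc})")  # matches "no such host"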
Oct  9 09:57:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:17.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:17 compute-0 nova_compute[187439]: 2025-10-09 09:57:17.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:57:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:17.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:18 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v724: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 4.7 KiB/s wr, 1 op/s
Oct  9 09:57:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:18.888Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:18.899Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:18.899Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:18.900Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:19.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:57:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
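The audit lines show the active mgr (mgr.compute-0.lwqgfy) polling osd blocklist ls against the mon; in this capture the dispatches land at 09:57:19 and 09:57:34, suggesting a ~15 s poll from one of the mgr modules (which module is responsible is not visible in these lines). A sketch for measuring the interval directly:

    import re
    import sys
    from datetime import datetime

    # Intervals between "osd blocklist ls" dispatches in the audit channel.
    TS = re.compile(r"^(\w{3} +\d+ \d\d:\d\d:\d\d)")

    stamps = []
    for line in open(sys.argv[1], errors="replace"):
        if "osd blocklist ls" in line and "dispatch" in line:
            if m := TS.match(line):
                stamps.append(datetime.strptime(m.group(1), "%b %d %H:%M:%S"))

    for prev, cur in zip(stamps, stamps[1:]):
        print(cur.time(), f"+{(cur - prev).total_seconds():.0f}s")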
Oct  9 09:57:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:57:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:57:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:57:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:57:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:57:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:57:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:19.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:20 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:20 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  9 09:57:20 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v725: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 4.7 KiB/s wr, 1 op/s
Oct  9 09:57:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:57:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:21.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:57:21 compute-0 nova_compute[187439]: 2025-10-09 09:57:21.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:57:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:21.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:57:22] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Oct  9 09:57:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:57:22] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Oct  9 09:57:22 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v726: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 9.8 KiB/s wr, 30 op/s
Oct  9 09:57:22 compute-0 nova_compute[187439]: 2025-10-09 09:57:22.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:57:22 compute-0 podman[194963]: 2025-10-09 09:57:22.610131631 +0000 UTC m=+0.048775801 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct  9 09:57:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:23.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:23.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:24 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v727: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Oct  9 09:57:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:24 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:24 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:24 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:25 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:57:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:25.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:57:25 compute-0 podman[194981]: 2025-10-09 09:57:25.617866395 +0000 UTC m=+0.057605760 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  9 09:57:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:25.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:26 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v728: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.2 KiB/s wr, 30 op/s
Oct  9 09:57:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:26 compute-0 nova_compute[187439]: 2025-10-09 09:57:26.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:57:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:27.056Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:27.067Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:27.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:27.069Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:57:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:27.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:57:27 compute-0 nova_compute[187439]: 2025-10-09 09:57:27.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:57:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:27.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:28 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v729: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Oct  9 09:57:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:28.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:28.899Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:28.900Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:28.900Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:57:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:29.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:57:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:29.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:30 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:30 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v730: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Oct  9 09:57:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:31.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:31 compute-0 nova_compute[187439]: 2025-10-09 09:57:31.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:57:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:31.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:57:32] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Oct  9 09:57:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:57:32] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Oct  9 09:57:32 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v731: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Oct  9 09:57:32 compute-0 nova_compute[187439]: 2025-10-09 09:57:32.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:57:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:33.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:33.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:34 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v732: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:57:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:57:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:57:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:34 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:34 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:34 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:57:35.357191) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855357226, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2157, "num_deletes": 251, "total_data_size": 4271884, "memory_usage": 4337240, "flush_reason": "Manual Compaction"}
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Oct  9 09:57:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:57:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:35.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855369081, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 4153932, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19880, "largest_seqno": 22036, "table_properties": {"data_size": 4144126, "index_size": 6236, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20058, "raw_average_key_size": 20, "raw_value_size": 4124479, "raw_average_value_size": 4187, "num_data_blocks": 272, "num_entries": 985, "num_filter_entries": 985, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003650, "oldest_key_time": 1760003650, "file_creation_time": 1760003855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 12022 microseconds, and 8731 cpu microseconds.
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:57:35.369207) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 4153932 bytes OK
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:57:35.369261) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:57:35.369650) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:57:35.369662) EVENT_LOG_v1 {"time_micros": 1760003855369658, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:57:35.369678) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4263146, prev total WAL file size 4263146, number of live WAL files 2.
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:57:35.370815) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(4056KB)], [44(12MB)]
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855371051, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 16775167, "oldest_snapshot_seqno": -1}
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5399 keys, 14600795 bytes, temperature: kUnknown
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855418035, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 14600795, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14562445, "index_size": 23776, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 136125, "raw_average_key_size": 25, "raw_value_size": 14462045, "raw_average_value_size": 2678, "num_data_blocks": 981, "num_entries": 5399, "num_filter_entries": 5399, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002419, "oldest_key_time": 0, "file_creation_time": 1760003855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:57:35.418376) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 14600795 bytes
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:57:35.418831) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 355.4 rd, 309.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 12.0 +0.0 blob) out(13.9 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 5923, records dropped: 524 output_compression: NoCompression
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:57:35.418847) EVENT_LOG_v1 {"time_micros": 1760003855418840, "job": 22, "event": "compaction_finished", "compaction_time_micros": 47205, "compaction_time_cpu_micros": 23908, "output_level": 6, "num_output_files": 1, "total_output_size": 14600795, "num_input_records": 5923, "num_output_records": 5399, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855419919, "job": 22, "event": "table_file_deletion", "file_number": 46}
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855422111, "job": 22, "event": "table_file_deletion", "file_number": 44}
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:57:35.370754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:57:35.422289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:57:35.422300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:57:35.422302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:57:35.422303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:57:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:57:35.422305) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
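The rocksdb block above is one manual flush-and-compact pass on the mon store: job 21 flushes a ~4.3 MB memtable to the 4,153,932-byte L0 table #46, then job 22 compacts #46 together with the 12 MB L6 file #44 into the 14,600,795-byte #47 and deletes both inputs, leaving a single populated level ([0, 0, 0, 0, 0, 0, 1]). The amplification and throughput figures in the summary line follow directly from the sizes in the EVENT_LOG entries (bytes per microsecond equals MB/s):

    # Re-derive job 22's summary figures from the EVENT_LOG sizes above.
    in_l0 = 4_153_932             # table #46, flushed by job 21
    in_total = 16_775_167         # input_data_size (tables #46 + #44)
    out = 14_600_795              # table #47
    t_us = 47_205                 # compaction_time_micros

    print("write-amplify:     ", round(out / in_l0, 1))               # 3.5
    print("read-write-amplify:", round((in_total + out) / in_l0, 1))  # 7.6
    print("read MB/s:         ", round(in_total / t_us, 1))           # 355.4
    print("write MB/s:        ", round(out / t_us, 1))                # 309.3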
Oct  9 09:57:35 compute-0 podman[195035]: 2025-10-09 09:57:35.652795445 +0000 UTC m=+0.080036186 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  9 09:57:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:57:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:35.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:57:36 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v733: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 09:57:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=cleanup t=2025-10-09T09:57:36.39292934Z level=info msg="Completed cleanup jobs" duration=5.067154ms
Oct  9 09:57:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=grafana.update.checker t=2025-10-09T09:57:36.489657529Z level=info msg="Update check succeeded" duration=45.274938ms
Oct  9 09:57:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=plugins.update.checker t=2025-10-09T09:57:36.519648899Z level=info msg="Update check succeeded" duration=70.480536ms
Oct  9 09:57:36 compute-0 nova_compute[187439]: 2025-10-09 09:57:36.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:57:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:37.057Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:37.065Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:37.065Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:37.066Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:57:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:37.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:57:37 compute-0 nova_compute[187439]: 2025-10-09 09:57:37.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:57:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:37.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:38 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v734: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:57:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:38.890Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:38.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:38.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:38.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:39.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:39.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:40 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:40 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v735: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:57:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:57:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:41.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:57:41 compute-0 nova_compute[187439]: 2025-10-09 09:57:41.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:57:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:41.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:57:42] "GET /metrics HTTP/1.1" 200 48521 "" "Prometheus/2.51.0"
Oct  9 09:57:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:57:42] "GET /metrics HTTP/1.1" 200 48521 "" "Prometheus/2.51.0"
Oct  9 09:57:42 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v736: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  9 09:57:42 compute-0 nova_compute[187439]: 2025-10-09 09:57:42.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:57:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.002000021s ======
Oct  9 09:57:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:43.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000021s
Oct  9 09:57:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:43.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:44 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v737: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  9 09:57:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:44 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:44 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:44 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:45 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:57:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:45.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:57:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:45.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:46 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v738: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  9 09:57:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:46 compute-0 nova_compute[187439]: 2025-10-09 09:57:46.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:57:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:47.058Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:47.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:47.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:47.069Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:47.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:47 compute-0 nova_compute[187439]: 2025-10-09 09:57:47.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:57:47 compute-0 podman[195070]: 2025-10-09 09:57:47.606759535 +0000 UTC m=+0.048038261 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  9 09:57:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:47.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:48 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v739: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  9 09:57:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:48.890Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:48.899Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:48.899Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:48.899Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:49.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:57:49
Oct  9 09:57:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:57:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 09:57:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['vms', 'backups', '.mgr', '.rgw.root', '.nfs', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta']
Oct  9 09:57:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 09:57:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:57:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:57:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:57:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:57:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:57:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:57:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:57:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:57:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:57:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:57:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:57:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:57:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:57:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:57:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:57:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:57:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:57:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:57:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:57:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:49.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:57:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:50 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:50 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v740: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  9 09:57:51 compute-0 nova_compute[187439]: 2025-10-09 09:57:51.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:57:51 compute-0 nova_compute[187439]: 2025-10-09 09:57:51.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:57:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:51.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:51 compute-0 nova_compute[187439]: 2025-10-09 09:57:51.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:57:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:51.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:52 compute-0 nova_compute[187439]: 2025-10-09 09:57:52.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:57:52 compute-0 nova_compute[187439]: 2025-10-09 09:57:52.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:57:52 compute-0 nova_compute[187439]: 2025-10-09 09:57:52.270 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:57:52 compute-0 nova_compute[187439]: 2025-10-09 09:57:52.270 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:57:52 compute-0 nova_compute[187439]: 2025-10-09 09:57:52.270 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:57:52 compute-0 nova_compute[187439]: 2025-10-09 09:57:52.270 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  9 09:57:52 compute-0 nova_compute[187439]: 2025-10-09 09:57:52.271 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 09:57:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:57:52] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Oct  9 09:57:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:57:52] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Oct  9 09:57:52 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v741: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct  9 09:57:52 compute-0 nova_compute[187439]: 2025-10-09 09:57:52.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:57:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:57:52 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3044383655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:57:52 compute-0 nova_compute[187439]: 2025-10-09 09:57:52.645 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 09:57:52 compute-0 nova_compute[187439]: 2025-10-09 09:57:52.910 2 WARNING nova.virt.libvirt.driver [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  9 09:57:52 compute-0 nova_compute[187439]: 2025-10-09 09:57:52.911 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4777MB free_disk=59.96738052368164GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  9 09:57:52 compute-0 nova_compute[187439]: 2025-10-09 09:57:52.912 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:57:52 compute-0 nova_compute[187439]: 2025-10-09 09:57:52.912 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:57:52 compute-0 nova_compute[187439]: 2025-10-09 09:57:52.966 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  9 09:57:52 compute-0 nova_compute[187439]: 2025-10-09 09:57:52.967 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  9 09:57:52 compute-0 nova_compute[187439]: 2025-10-09 09:57:52.980 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 09:57:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:57:53 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3402390666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:57:53 compute-0 nova_compute[187439]: 2025-10-09 09:57:53.345 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 09:57:53 compute-0 nova_compute[187439]: 2025-10-09 09:57:53.349 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  9 09:57:53 compute-0 nova_compute[187439]: 2025-10-09 09:57:53.361 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  9 09:57:53 compute-0 nova_compute[187439]: 2025-10-09 09:57:53.362 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  9 09:57:53 compute-0 nova_compute[187439]: 2025-10-09 09:57:53.362 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:57:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:53.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:53 compute-0 podman[195163]: 2025-10-09 09:57:53.620206066 +0000 UTC m=+0.058679996 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct  9 09:57:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:53.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:54 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v742: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 12 KiB/s wr, 10 op/s
Oct  9 09:57:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:54 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:54 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:54 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:55 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:55 compute-0 nova_compute[187439]: 2025-10-09 09:57:55.362 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:57:55 compute-0 nova_compute[187439]: 2025-10-09 09:57:55.362 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:57:55 compute-0 nova_compute[187439]: 2025-10-09 09:57:55.363 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  9 09:57:55 compute-0 nova_compute[187439]: 2025-10-09 09:57:55.363 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  9 09:57:55 compute-0 nova_compute[187439]: 2025-10-09 09:57:55.379 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  9 09:57:55 compute-0 nova_compute[187439]: 2025-10-09 09:57:55.379 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:57:55 compute-0 nova_compute[187439]: 2025-10-09 09:57:55.379 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:57:55 compute-0 nova_compute[187439]: 2025-10-09 09:57:55.380 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:57:55 compute-0 nova_compute[187439]: 2025-10-09 09:57:55.380 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  9 09:57:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:57:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:55.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:57:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:55.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:56 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v743: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Oct  9 09:57:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:56 compute-0 podman[195183]: 2025-10-09 09:57:56.618323756 +0000 UTC m=+0.056777189 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct  9 09:57:56 compute-0 nova_compute[187439]: 2025-10-09 09:57:56.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:57:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:57.058Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:57.067Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:57.067Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:57.069Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:57:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:57.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:57:57 compute-0 nova_compute[187439]: 2025-10-09 09:57:57.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:57:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:57.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:58 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v744: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct  9 09:57:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:58.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:58.900Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:58.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:57:58.902Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:57:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  9 09:57:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:59.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:57:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:59.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:57:59 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:00 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:00 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:00 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:00 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v745: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct  9 09:58:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:01.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:01 compute-0 nova_compute[187439]: 2025-10-09 09:58:01.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:01.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:58:02] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Oct  9 09:58:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:58:02] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Oct  9 09:58:02 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v746: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Oct  9 09:58:02 compute-0 nova_compute[187439]: 2025-10-09 09:58:02.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:03.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:03.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v747: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Oct  9 09:58:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:58:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:58:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:04 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:05 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:05 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:05 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:05.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:05.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v748: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct  9 09:58:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:06 compute-0 podman[195210]: 2025-10-09 09:58:06.63983404 +0000 UTC m=+0.077885954 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:58:06 compute-0 nova_compute[187439]: 2025-10-09 09:58:06.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:07.060Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:07.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:07.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:07.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:58:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:07.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:58:07 compute-0 nova_compute[187439]: 2025-10-09 09:58:07.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:07.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v749: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  9 09:58:08 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:08.860 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  9 09:58:08 compute-0 nova_compute[187439]: 2025-10-09 09:58:08.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:08 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:08.862 92053 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  9 09:58:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:08.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:08.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:08.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:08.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:09.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:09.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:09 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:09 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:09 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:10.111 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:58:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:10.112 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:58:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:10.112 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:58:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v750: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  9 09:58:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct  9 09:58:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:58:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct  9 09:58:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 09:58:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct  9 09:58:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:58:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:58:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:58:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:58:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:58:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:58:11 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v751: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.4 MiB/s wr, 73 op/s
Oct  9 09:58:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:58:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:58:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:58:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:58:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:58:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:58:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:58:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:58:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:58:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:58:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:11.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:58:11 compute-0 podman[195399]: 2025-10-09 09:58:11.483564621 +0000 UTC m=+0.041312757 container create 5db3fb52216a07b4f8b0cc1ba09348df23a482350458faf1ece3a41ec62ecb5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jemison, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  9 09:58:11 compute-0 systemd[1]: Started libpod-conmon-5db3fb52216a07b4f8b0cc1ba09348df23a482350458faf1ece3a41ec62ecb5e.scope.
Oct  9 09:58:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:58:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 09:58:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:58:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:58:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:58:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:58:11 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:58:11 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:58:11 compute-0 podman[195399]: 2025-10-09 09:58:11.555585152 +0000 UTC m=+0.113333288 container init 5db3fb52216a07b4f8b0cc1ba09348df23a482350458faf1ece3a41ec62ecb5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:58:11 compute-0 podman[195399]: 2025-10-09 09:58:11.462738994 +0000 UTC m=+0.020487150 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:58:11 compute-0 podman[195399]: 2025-10-09 09:58:11.562027511 +0000 UTC m=+0.119775636 container start 5db3fb52216a07b4f8b0cc1ba09348df23a482350458faf1ece3a41ec62ecb5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jemison, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct  9 09:58:11 compute-0 podman[195399]: 2025-10-09 09:58:11.563184309 +0000 UTC m=+0.120932446 container attach 5db3fb52216a07b4f8b0cc1ba09348df23a482350458faf1ece3a41ec62ecb5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jemison, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:58:11 compute-0 serene_jemison[195412]: 167 167
Oct  9 09:58:11 compute-0 systemd[1]: libpod-5db3fb52216a07b4f8b0cc1ba09348df23a482350458faf1ece3a41ec62ecb5e.scope: Deactivated successfully.
Oct  9 09:58:11 compute-0 podman[195399]: 2025-10-09 09:58:11.567615589 +0000 UTC m=+0.125363855 container died 5db3fb52216a07b4f8b0cc1ba09348df23a482350458faf1ece3a41ec62ecb5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:58:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebeff062fa612d35f7559f5bd48ebc41e88a5d125060e773818b53c45a51ccb4-merged.mount: Deactivated successfully.
Oct  9 09:58:11 compute-0 podman[195399]: 2025-10-09 09:58:11.590670936 +0000 UTC m=+0.148419072 container remove 5db3fb52216a07b4f8b0cc1ba09348df23a482350458faf1ece3a41ec62ecb5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 09:58:11 compute-0 systemd[1]: libpod-conmon-5db3fb52216a07b4f8b0cc1ba09348df23a482350458faf1ece3a41ec62ecb5e.scope: Deactivated successfully.
Oct  9 09:58:11 compute-0 nova_compute[187439]: 2025-10-09 09:58:11.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:11 compute-0 podman[195434]: 2025-10-09 09:58:11.752778679 +0000 UTC m=+0.040500297 container create 90fcf5fb653214ee8b48d311ccf0093ba890425b9b42d928fc2b8977d4aec2c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  9 09:58:11 compute-0 systemd[1]: Started libpod-conmon-90fcf5fb653214ee8b48d311ccf0093ba890425b9b42d928fc2b8977d4aec2c8.scope.
Oct  9 09:58:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  9 09:58:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2643942600' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  9 09:58:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  9 09:58:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2643942600' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  9 09:58:11 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c15de3dbc389caa1518cc621a9a47b8db3d62314326b9cfcae6a4ec86e41d675/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c15de3dbc389caa1518cc621a9a47b8db3d62314326b9cfcae6a4ec86e41d675/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c15de3dbc389caa1518cc621a9a47b8db3d62314326b9cfcae6a4ec86e41d675/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c15de3dbc389caa1518cc621a9a47b8db3d62314326b9cfcae6a4ec86e41d675/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c15de3dbc389caa1518cc621a9a47b8db3d62314326b9cfcae6a4ec86e41d675/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:58:11 compute-0 podman[195434]: 2025-10-09 09:58:11.824772429 +0000 UTC m=+0.112494057 container init 90fcf5fb653214ee8b48d311ccf0093ba890425b9b42d928fc2b8977d4aec2c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:58:11 compute-0 podman[195434]: 2025-10-09 09:58:11.738407824 +0000 UTC m=+0.026129462 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:58:11 compute-0 podman[195434]: 2025-10-09 09:58:11.833861724 +0000 UTC m=+0.121583343 container start 90fcf5fb653214ee8b48d311ccf0093ba890425b9b42d928fc2b8977d4aec2c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wilson, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:58:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:11.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:11 compute-0 podman[195434]: 2025-10-09 09:58:11.836319425 +0000 UTC m=+0.124041043 container attach 90fcf5fb653214ee8b48d311ccf0093ba890425b9b42d928fc2b8977d4aec2c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 09:58:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:11.864 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ef217152-08e8-40c8-a663-3565c5b77d4a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  9 09:58:12 compute-0 bold_wilson[195447]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:58:12 compute-0 bold_wilson[195447]: --> All data devices are unavailable
Oct  9 09:58:12 compute-0 podman[195434]: 2025-10-09 09:58:12.136831561 +0000 UTC m=+0.424553179 container died 90fcf5fb653214ee8b48d311ccf0093ba890425b9b42d928fc2b8977d4aec2c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  9 09:58:12 compute-0 systemd[1]: libpod-90fcf5fb653214ee8b48d311ccf0093ba890425b9b42d928fc2b8977d4aec2c8.scope: Deactivated successfully.
Oct  9 09:58:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-c15de3dbc389caa1518cc621a9a47b8db3d62314326b9cfcae6a4ec86e41d675-merged.mount: Deactivated successfully.
Oct  9 09:58:12 compute-0 podman[195434]: 2025-10-09 09:58:12.159954395 +0000 UTC m=+0.447676013 container remove 90fcf5fb653214ee8b48d311ccf0093ba890425b9b42d928fc2b8977d4aec2c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_wilson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:58:12 compute-0 systemd[1]: libpod-conmon-90fcf5fb653214ee8b48d311ccf0093ba890425b9b42d928fc2b8977d4aec2c8.scope: Deactivated successfully.
Oct  9 09:58:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:58:12] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Oct  9 09:58:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:58:12] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Oct  9 09:58:12 compute-0 nova_compute[187439]: 2025-10-09 09:58:12.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:12 compute-0 podman[195580]: 2025-10-09 09:58:12.693099307 +0000 UTC m=+0.040269271 container create 6c69296060fd0f782b21e9e2fb82985c6a46038165fbc3acc61beeca061cd637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  9 09:58:12 compute-0 systemd[1]: Started libpod-conmon-6c69296060fd0f782b21e9e2fb82985c6a46038165fbc3acc61beeca061cd637.scope.
Oct  9 09:58:12 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:58:12 compute-0 podman[195580]: 2025-10-09 09:58:12.758726083 +0000 UTC m=+0.105896037 container init 6c69296060fd0f782b21e9e2fb82985c6a46038165fbc3acc61beeca061cd637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_liskov, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  9 09:58:12 compute-0 podman[195580]: 2025-10-09 09:58:12.764822709 +0000 UTC m=+0.111992663 container start 6c69296060fd0f782b21e9e2fb82985c6a46038165fbc3acc61beeca061cd637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  9 09:58:12 compute-0 podman[195580]: 2025-10-09 09:58:12.766101819 +0000 UTC m=+0.113271793 container attach 6c69296060fd0f782b21e9e2fb82985c6a46038165fbc3acc61beeca061cd637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_liskov, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  9 09:58:12 compute-0 compassionate_liskov[195593]: 167 167
Oct  9 09:58:12 compute-0 systemd[1]: libpod-6c69296060fd0f782b21e9e2fb82985c6a46038165fbc3acc61beeca061cd637.scope: Deactivated successfully.
Oct  9 09:58:12 compute-0 conmon[195593]: conmon 6c69296060fd0f782b21 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6c69296060fd0f782b21e9e2fb82985c6a46038165fbc3acc61beeca061cd637.scope/container/memory.events
Oct  9 09:58:12 compute-0 podman[195580]: 2025-10-09 09:58:12.770864483 +0000 UTC m=+0.118034437 container died 6c69296060fd0f782b21e9e2fb82985c6a46038165fbc3acc61beeca061cd637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_liskov, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct  9 09:58:12 compute-0 podman[195580]: 2025-10-09 09:58:12.6753259 +0000 UTC m=+0.022495875 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:58:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b638e617901f36e4d1db925565d7f3b254f52777b8b2a10acccd01d34cfa7f0-merged.mount: Deactivated successfully.
Oct  9 09:58:12 compute-0 podman[195580]: 2025-10-09 09:58:12.7982504 +0000 UTC m=+0.145420364 container remove 6c69296060fd0f782b21e9e2fb82985c6a46038165fbc3acc61beeca061cd637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Oct  9 09:58:12 compute-0 systemd[1]: libpod-conmon-6c69296060fd0f782b21e9e2fb82985c6a46038165fbc3acc61beeca061cd637.scope: Deactivated successfully.
Oct  9 09:58:12 compute-0 podman[195616]: 2025-10-09 09:58:12.946517885 +0000 UTC m=+0.040469018 container create 696b018c4d5b28ca970cc2996780609a30339b4b9299c84a5f73b4c514aedc68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  9 09:58:12 compute-0 systemd[1]: Started libpod-conmon-696b018c4d5b28ca970cc2996780609a30339b4b9299c84a5f73b4c514aedc68.scope.
Oct  9 09:58:13 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v752: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 121 KiB/s wr, 26 op/s
Oct  9 09:58:13 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0903aa0dd607d557d9ff07d154017a8861c25738606a0468adaf56c0efcdcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0903aa0dd607d557d9ff07d154017a8861c25738606a0468adaf56c0efcdcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0903aa0dd607d557d9ff07d154017a8861c25738606a0468adaf56c0efcdcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0903aa0dd607d557d9ff07d154017a8861c25738606a0468adaf56c0efcdcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:58:13 compute-0 podman[195616]: 2025-10-09 09:58:13.025293073 +0000 UTC m=+0.119244225 container init 696b018c4d5b28ca970cc2996780609a30339b4b9299c84a5f73b4c514aedc68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_joliot, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:58:13 compute-0 podman[195616]: 2025-10-09 09:58:12.932204658 +0000 UTC m=+0.026155810 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:58:13 compute-0 podman[195616]: 2025-10-09 09:58:13.032369435 +0000 UTC m=+0.126320577 container start 696b018c4d5b28ca970cc2996780609a30339b4b9299c84a5f73b4c514aedc68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:58:13 compute-0 podman[195616]: 2025-10-09 09:58:13.033907073 +0000 UTC m=+0.127858225 container attach 696b018c4d5b28ca970cc2996780609a30339b4b9299c84a5f73b4c514aedc68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_joliot, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]: {
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:    "1": [
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:        {
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:            "devices": [
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:                "/dev/loop3"
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:            ],
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:            "lv_name": "ceph_lv0",
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:            "lv_size": "21470642176",
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:            "name": "ceph_lv0",
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:            "tags": {
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:                "ceph.cluster_name": "ceph",
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:                "ceph.crush_device_class": "",
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:                "ceph.encrypted": "0",
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:                "ceph.osd_id": "1",
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:                "ceph.type": "block",
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:                "ceph.vdo": "0",
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:                "ceph.with_tpm": "0"
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:            },
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:            "type": "block",
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:            "vg_name": "ceph_vg0"
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:        }
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]:    ]
Oct  9 09:58:13 compute-0 nostalgic_joliot[195629]: }
Oct  9 09:58:13 compute-0 systemd[1]: libpod-696b018c4d5b28ca970cc2996780609a30339b4b9299c84a5f73b4c514aedc68.scope: Deactivated successfully.
Oct  9 09:58:13 compute-0 conmon[195629]: conmon 696b018c4d5b28ca970c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-696b018c4d5b28ca970cc2996780609a30339b4b9299c84a5f73b4c514aedc68.scope/container/memory.events
Oct  9 09:58:13 compute-0 podman[195616]: 2025-10-09 09:58:13.285233207 +0000 UTC m=+0.379184340 container died 696b018c4d5b28ca970cc2996780609a30339b4b9299c84a5f73b4c514aedc68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_joliot, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  9 09:58:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-be0903aa0dd607d557d9ff07d154017a8861c25738606a0468adaf56c0efcdcb-merged.mount: Deactivated successfully.
Oct  9 09:58:13 compute-0 podman[195616]: 2025-10-09 09:58:13.314038669 +0000 UTC m=+0.407989801 container remove 696b018c4d5b28ca970cc2996780609a30339b4b9299c84a5f73b4c514aedc68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  9 09:58:13 compute-0 systemd[1]: libpod-conmon-696b018c4d5b28ca970cc2996780609a30339b4b9299c84a5f73b4c514aedc68.scope: Deactivated successfully.
Oct  9 09:58:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:13.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:58:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:13.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:58:13 compute-0 podman[195727]: 2025-10-09 09:58:13.857966897 +0000 UTC m=+0.042184930 container create 6a1b371b6a17f390650088a8f2b4f35cf54ea708561ea08b242fb01c78deda4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_leakey, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 09:58:13 compute-0 systemd[1]: Started libpod-conmon-6a1b371b6a17f390650088a8f2b4f35cf54ea708561ea08b242fb01c78deda4a.scope.
Oct  9 09:58:13 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:58:13 compute-0 podman[195727]: 2025-10-09 09:58:13.921026756 +0000 UTC m=+0.105244799 container init 6a1b371b6a17f390650088a8f2b4f35cf54ea708561ea08b242fb01c78deda4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_leakey, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:58:13 compute-0 podman[195727]: 2025-10-09 09:58:13.928340206 +0000 UTC m=+0.112558239 container start 6a1b371b6a17f390650088a8f2b4f35cf54ea708561ea08b242fb01c78deda4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_leakey, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:58:13 compute-0 podman[195727]: 2025-10-09 09:58:13.929740373 +0000 UTC m=+0.113958426 container attach 6a1b371b6a17f390650088a8f2b4f35cf54ea708561ea08b242fb01c78deda4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 09:58:13 compute-0 elegant_leakey[195740]: 167 167
Oct  9 09:58:13 compute-0 systemd[1]: libpod-6a1b371b6a17f390650088a8f2b4f35cf54ea708561ea08b242fb01c78deda4a.scope: Deactivated successfully.
Oct  9 09:58:13 compute-0 conmon[195740]: conmon 6a1b371b6a17f3906500 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6a1b371b6a17f390650088a8f2b4f35cf54ea708561ea08b242fb01c78deda4a.scope/container/memory.events
Oct  9 09:58:13 compute-0 podman[195727]: 2025-10-09 09:58:13.935584865 +0000 UTC m=+0.119802898 container died 6a1b371b6a17f390650088a8f2b4f35cf54ea708561ea08b242fb01c78deda4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:58:13 compute-0 podman[195727]: 2025-10-09 09:58:13.842072732 +0000 UTC m=+0.026290785 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:58:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7cdddd2d1260c14c506487945ae75eec617eca4d22c1665be3366e4ef37302f-merged.mount: Deactivated successfully.
Oct  9 09:58:13 compute-0 podman[195727]: 2025-10-09 09:58:13.961002654 +0000 UTC m=+0.145220686 container remove 6a1b371b6a17f390650088a8f2b4f35cf54ea708561ea08b242fb01c78deda4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  9 09:58:13 compute-0 systemd[1]: libpod-conmon-6a1b371b6a17f390650088a8f2b4f35cf54ea708561ea08b242fb01c78deda4a.scope: Deactivated successfully.
Oct  9 09:58:14 compute-0 podman[195762]: 2025-10-09 09:58:14.108970575 +0000 UTC m=+0.038921901 container create cc3f9deb5707f6c35ccb6af01579276a0d952bdb36ad6bc06b88846bfd1f0e1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:58:14 compute-0 systemd[1]: Started libpod-conmon-cc3f9deb5707f6c35ccb6af01579276a0d952bdb36ad6bc06b88846bfd1f0e1b.scope.
Oct  9 09:58:14 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:58:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8023cfbf22f7a0db29ef29bc7e27d9e201145f961cc8437346dcda1989b987c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:58:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8023cfbf22f7a0db29ef29bc7e27d9e201145f961cc8437346dcda1989b987c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:58:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8023cfbf22f7a0db29ef29bc7e27d9e201145f961cc8437346dcda1989b987c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:58:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8023cfbf22f7a0db29ef29bc7e27d9e201145f961cc8437346dcda1989b987c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:58:14 compute-0 podman[195762]: 2025-10-09 09:58:14.183026443 +0000 UTC m=+0.112977779 container init cc3f9deb5707f6c35ccb6af01579276a0d952bdb36ad6bc06b88846bfd1f0e1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_jemison, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct  9 09:58:14 compute-0 podman[195762]: 2025-10-09 09:58:14.093786237 +0000 UTC m=+0.023737583 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:58:14 compute-0 podman[195762]: 2025-10-09 09:58:14.189471494 +0000 UTC m=+0.119422820 container start cc3f9deb5707f6c35ccb6af01579276a0d952bdb36ad6bc06b88846bfd1f0e1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_jemison, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct  9 09:58:14 compute-0 podman[195762]: 2025-10-09 09:58:14.191680907 +0000 UTC m=+0.121632233 container attach cc3f9deb5707f6c35ccb6af01579276a0d952bdb36ad6bc06b88846bfd1f0e1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_jemison, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 09:58:14 compute-0 dazzling_jemison[195776]: {}
Oct  9 09:58:14 compute-0 lvm[195854]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:58:14 compute-0 lvm[195854]: VG ceph_vg0 finished
Oct  9 09:58:14 compute-0 systemd[1]: libpod-cc3f9deb5707f6c35ccb6af01579276a0d952bdb36ad6bc06b88846bfd1f0e1b.scope: Deactivated successfully.
Oct  9 09:58:14 compute-0 systemd[1]: libpod-cc3f9deb5707f6c35ccb6af01579276a0d952bdb36ad6bc06b88846bfd1f0e1b.scope: Consumed 1.002s CPU time.
Oct  9 09:58:14 compute-0 podman[195855]: 2025-10-09 09:58:14.831745502 +0000 UTC m=+0.021989570 container died cc3f9deb5707f6c35ccb6af01579276a0d952bdb36ad6bc06b88846bfd1f0e1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  9 09:58:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-8023cfbf22f7a0db29ef29bc7e27d9e201145f961cc8437346dcda1989b987c2-merged.mount: Deactivated successfully.
Oct  9 09:58:14 compute-0 podman[195855]: 2025-10-09 09:58:14.860866379 +0000 UTC m=+0.051110445 container remove cc3f9deb5707f6c35ccb6af01579276a0d952bdb36ad6bc06b88846bfd1f0e1b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:58:14 compute-0 systemd[1]: libpod-conmon-cc3f9deb5707f6c35ccb6af01579276a0d952bdb36ad6bc06b88846bfd1f0e1b.scope: Deactivated successfully.
Oct  9 09:58:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:58:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:58:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:58:14 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:58:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:14 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:15 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v753: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 121 KiB/s wr, 26 op/s
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.092 2 DEBUG oslo_concurrency.lockutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "3ffc41de-d07a-40ee-a277-623db113eda1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.092 2 DEBUG oslo_concurrency.lockutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "3ffc41de-d07a-40ee-a277-623db113eda1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.121 2 DEBUG nova.compute.manager [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.172 2 DEBUG oslo_concurrency.lockutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.173 2 DEBUG oslo_concurrency.lockutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.179 2 DEBUG nova.virt.hardware [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.179 2 INFO nova.compute.claims [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Claim successful on node compute-0.ctlplane.example.com
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.243 2 DEBUG oslo_concurrency.processutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 09:58:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:15.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:15 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:58:15 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2262883313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.614 2 DEBUG oslo_concurrency.processutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.371s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.620 2 DEBUG nova.compute.provider_tree [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.641 2 DEBUG nova.scheduler.client.report [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.658 2 DEBUG oslo_concurrency.lockutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.659 2 DEBUG nova.compute.manager [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.694 2 DEBUG nova.compute.manager [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.694 2 DEBUG nova.network.neutron [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.712 2 INFO nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.725 2 DEBUG nova.compute.manager [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.796 2 DEBUG nova.compute.manager [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.798 2 DEBUG nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.798 2 INFO nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Creating image(s)
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.821 2 DEBUG nova.storage.rbd_utils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 3ffc41de-d07a-40ee-a277-623db113eda1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  9 09:58:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:15.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.842 2 DEBUG nova.storage.rbd_utils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 3ffc41de-d07a-40ee-a277-623db113eda1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.861 2 DEBUG nova.storage.rbd_utils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 3ffc41de-d07a-40ee-a277-623db113eda1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.864 2 DEBUG oslo_concurrency.processutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.886 2 DEBUG nova.policy [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  9 09:58:15 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:58:15 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.924 2 DEBUG oslo_concurrency.processutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.925 2 DEBUG oslo_concurrency.lockutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.926 2 DEBUG oslo_concurrency.lockutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.926 2 DEBUG oslo_concurrency.lockutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.945 2 DEBUG nova.storage.rbd_utils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 3ffc41de-d07a-40ee-a277-623db113eda1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  9 09:58:15 compute-0 nova_compute[187439]: 2025-10-09 09:58:15.949 2 DEBUG oslo_concurrency.processutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb 3ffc41de-d07a-40ee-a277-623db113eda1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 09:58:16 compute-0 nova_compute[187439]: 2025-10-09 09:58:16.122 2 DEBUG oslo_concurrency.processutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb 3ffc41de-d07a-40ee-a277-623db113eda1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 09:58:16 compute-0 nova_compute[187439]: 2025-10-09 09:58:16.174 2 DEBUG nova.storage.rbd_utils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] resizing rbd image 3ffc41de-d07a-40ee-a277-623db113eda1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  9 09:58:16 compute-0 nova_compute[187439]: 2025-10-09 09:58:16.237 2 DEBUG nova.objects.instance [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'migration_context' on Instance uuid 3ffc41de-d07a-40ee-a277-623db113eda1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  9 09:58:16 compute-0 nova_compute[187439]: 2025-10-09 09:58:16.247 2 DEBUG nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  9 09:58:16 compute-0 nova_compute[187439]: 2025-10-09 09:58:16.248 2 DEBUG nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Ensure instance console log exists: /var/lib/nova/instances/3ffc41de-d07a-40ee-a277-623db113eda1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  9 09:58:16 compute-0 nova_compute[187439]: 2025-10-09 09:58:16.248 2 DEBUG oslo_concurrency.lockutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:58:16 compute-0 nova_compute[187439]: 2025-10-09 09:58:16.248 2 DEBUG oslo_concurrency.lockutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:58:16 compute-0 nova_compute[187439]: 2025-10-09 09:58:16.249 2 DEBUG oslo_concurrency.lockutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:58:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:16 compute-0 nova_compute[187439]: 2025-10-09 09:58:16.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:17 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v754: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 17 KiB/s wr, 2 op/s
Oct  9 09:58:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:17.060Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:17.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:17.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:17.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:17.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:17 compute-0 nova_compute[187439]: 2025-10-09 09:58:17.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:17.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:18 compute-0 nova_compute[187439]: 2025-10-09 09:58:18.455 2 DEBUG nova.network.neutron [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Successfully created port: eb8548dc-6635-4371-9e8f-c5b635941d12 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  9 09:58:18 compute-0 podman[196083]: 2025-10-09 09:58:18.62647947 +0000 UTC m=+0.053319889 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 09:58:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:18.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:18.900Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:18.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:18.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:19 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v755: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 17 KiB/s wr, 2 op/s
Oct  9 09:58:19 compute-0 nova_compute[187439]: 2025-10-09 09:58:19.047 2 DEBUG nova.network.neutron [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Successfully updated port: eb8548dc-6635-4371-9e8f-c5b635941d12 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  9 09:58:19 compute-0 nova_compute[187439]: 2025-10-09 09:58:19.066 2 DEBUG oslo_concurrency.lockutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-3ffc41de-d07a-40ee-a277-623db113eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  9 09:58:19 compute-0 nova_compute[187439]: 2025-10-09 09:58:19.067 2 DEBUG oslo_concurrency.lockutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-3ffc41de-d07a-40ee-a277-623db113eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  9 09:58:19 compute-0 nova_compute[187439]: 2025-10-09 09:58:19.067 2 DEBUG nova.network.neutron [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  9 09:58:19 compute-0 nova_compute[187439]: 2025-10-09 09:58:19.104 2 DEBUG nova.compute.manager [req-748a2bfe-4027-42a2-8bcd-d1434611c0ab req-a0acfaa9-d34e-4353-bb03-a65bc0b73189 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Received event network-changed-eb8548dc-6635-4371-9e8f-c5b635941d12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  9 09:58:19 compute-0 nova_compute[187439]: 2025-10-09 09:58:19.104 2 DEBUG nova.compute.manager [req-748a2bfe-4027-42a2-8bcd-d1434611c0ab req-a0acfaa9-d34e-4353-bb03-a65bc0b73189 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Refreshing instance network info cache due to event network-changed-eb8548dc-6635-4371-9e8f-c5b635941d12. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  9 09:58:19 compute-0 nova_compute[187439]: 2025-10-09 09:58:19.104 2 DEBUG oslo_concurrency.lockutils [req-748a2bfe-4027-42a2-8bcd-d1434611c0ab req-a0acfaa9-d34e-4353-bb03-a65bc0b73189 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-3ffc41de-d07a-40ee-a277-623db113eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  9 09:58:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:19.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:58:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:58:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:58:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:58:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:58:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:58:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:58:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:58:19 compute-0 nova_compute[187439]: 2025-10-09 09:58:19.756 2 DEBUG nova.network.neutron [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  9 09:58:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:19.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:20 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:21 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v756: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.0 MiB/s wr, 33 op/s
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.026 2 DEBUG nova.network.neutron [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Updating instance_info_cache with network_info: [{"id": "eb8548dc-6635-4371-9e8f-c5b635941d12", "address": "fa:16:3e:53:d9:2e", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8548dc-66", "ovs_interfaceid": "eb8548dc-6635-4371-9e8f-c5b635941d12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.040 2 DEBUG oslo_concurrency.lockutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-3ffc41de-d07a-40ee-a277-623db113eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.040 2 DEBUG nova.compute.manager [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Instance network_info: |[{"id": "eb8548dc-6635-4371-9e8f-c5b635941d12", "address": "fa:16:3e:53:d9:2e", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8548dc-66", "ovs_interfaceid": "eb8548dc-6635-4371-9e8f-c5b635941d12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.040 2 DEBUG oslo_concurrency.lockutils [req-748a2bfe-4027-42a2-8bcd-d1434611c0ab req-a0acfaa9-d34e-4353-bb03-a65bc0b73189 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-3ffc41de-d07a-40ee-a277-623db113eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.040 2 DEBUG nova.network.neutron [req-748a2bfe-4027-42a2-8bcd-d1434611c0ab req-a0acfaa9-d34e-4353-bb03-a65bc0b73189 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Refreshing network info cache for port eb8548dc-6635-4371-9e8f-c5b635941d12 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.042 2 DEBUG nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Start _get_guest_xml network_info=[{"id": "eb8548dc-6635-4371-9e8f-c5b635941d12", "address": "fa:16:3e:53:d9:2e", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8548dc-66", "ovs_interfaceid": "eb8548dc-6635-4371-9e8f-c5b635941d12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'boot_index': 0, 'encryption_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'encrypted': False, 'guest_format': None, 'image_id': '9546778e-959c-466e-9bef-81ace5bd1cc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.049 2 WARNING nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.053 2 DEBUG nova.virt.libvirt.host [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.053 2 DEBUG nova.virt.libvirt.host [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.056 2 DEBUG nova.virt.libvirt.host [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.056 2 DEBUG nova.virt.libvirt.host [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.057 2 DEBUG nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.057 2 DEBUG nova.virt.hardware [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T09:54:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c4b2ce4-c9d2-467c-bac4-dc6a1184a891',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.057 2 DEBUG nova.virt.hardware [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.057 2 DEBUG nova.virt.hardware [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.058 2 DEBUG nova.virt.hardware [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.058 2 DEBUG nova.virt.hardware [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.058 2 DEBUG nova.virt.hardware [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.058 2 DEBUG nova.virt.hardware [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.058 2 DEBUG nova.virt.hardware [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.058 2 DEBUG nova.virt.hardware [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.059 2 DEBUG nova.virt.hardware [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.059 2 DEBUG nova.virt.hardware [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.061 2 DEBUG oslo_concurrency.processutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:58:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:21.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 09:58:21 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2004045298' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.449 2 DEBUG oslo_concurrency.processutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.472 2 DEBUG nova.storage.rbd_utils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 3ffc41de-d07a-40ee-a277-623db113eda1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.475 2 DEBUG oslo_concurrency.processutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 09:58:21 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3984953572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.837 2 DEBUG oslo_concurrency.processutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.839 2 DEBUG nova.virt.libvirt.vif [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:58:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-172706500',display_name='tempest-TestNetworkBasicOps-server-172706500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-172706500',id=5,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDJSTqHJWrIfo1mN4bEVd0fVRgjxQS25gZjKH3NGwwQ9zKdcgq9+vWhuZvoPJGs0R+tT7AFviVN5gsk0ZZjp6J4sC0r1KbTYRYWw3Ckg2zIuat+ZsSbwAmmmI+FmlZx13w==',key_name='tempest-TestNetworkBasicOps-233590586',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-mapx1vd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:58:15Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=3ffc41de-d07a-40ee-a277-623db113eda1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eb8548dc-6635-4371-9e8f-c5b635941d12", "address": "fa:16:3e:53:d9:2e", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8548dc-66", "ovs_interfaceid": "eb8548dc-6635-4371-9e8f-c5b635941d12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.840 2 DEBUG nova.network.os_vif_util [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "eb8548dc-6635-4371-9e8f-c5b635941d12", "address": "fa:16:3e:53:d9:2e", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8548dc-66", "ovs_interfaceid": "eb8548dc-6635-4371-9e8f-c5b635941d12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.841 2 DEBUG nova.network.os_vif_util [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:d9:2e,bridge_name='br-int',has_traffic_filtering=True,id=eb8548dc-6635-4371-9e8f-c5b635941d12,network=Network(48ce5fca-3386-4b8a-82e2-88fc71a94881),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb8548dc-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.842 2 DEBUG nova.objects.instance [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ffc41de-d07a-40ee-a277-623db113eda1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:58:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:21.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.854 2 DEBUG nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] End _get_guest_xml xml=<domain type="kvm">
Oct  9 09:58:21 compute-0 nova_compute[187439]:  <uuid>3ffc41de-d07a-40ee-a277-623db113eda1</uuid>
Oct  9 09:58:21 compute-0 nova_compute[187439]:  <name>instance-00000005</name>
Oct  9 09:58:21 compute-0 nova_compute[187439]:  <memory>131072</memory>
Oct  9 09:58:21 compute-0 nova_compute[187439]:  <vcpu>1</vcpu>
Oct  9 09:58:21 compute-0 nova_compute[187439]:  <metadata>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <nova:name>tempest-TestNetworkBasicOps-server-172706500</nova:name>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <nova:creationTime>2025-10-09 09:58:21</nova:creationTime>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <nova:flavor name="m1.nano">
Oct  9 09:58:21 compute-0 nova_compute[187439]:        <nova:memory>128</nova:memory>
Oct  9 09:58:21 compute-0 nova_compute[187439]:        <nova:disk>1</nova:disk>
Oct  9 09:58:21 compute-0 nova_compute[187439]:        <nova:swap>0</nova:swap>
Oct  9 09:58:21 compute-0 nova_compute[187439]:        <nova:ephemeral>0</nova:ephemeral>
Oct  9 09:58:21 compute-0 nova_compute[187439]:        <nova:vcpus>1</nova:vcpus>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      </nova:flavor>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <nova:owner>
Oct  9 09:58:21 compute-0 nova_compute[187439]:        <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct  9 09:58:21 compute-0 nova_compute[187439]:        <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      </nova:owner>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <nova:ports>
Oct  9 09:58:21 compute-0 nova_compute[187439]:        <nova:port uuid="eb8548dc-6635-4371-9e8f-c5b635941d12">
Oct  9 09:58:21 compute-0 nova_compute[187439]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:        </nova:port>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      </nova:ports>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    </nova:instance>
Oct  9 09:58:21 compute-0 nova_compute[187439]:  </metadata>
Oct  9 09:58:21 compute-0 nova_compute[187439]:  <sysinfo type="smbios">
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <system>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <entry name="manufacturer">RDO</entry>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <entry name="product">OpenStack Compute</entry>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <entry name="serial">3ffc41de-d07a-40ee-a277-623db113eda1</entry>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <entry name="uuid">3ffc41de-d07a-40ee-a277-623db113eda1</entry>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <entry name="family">Virtual Machine</entry>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    </system>
Oct  9 09:58:21 compute-0 nova_compute[187439]:  </sysinfo>
Oct  9 09:58:21 compute-0 nova_compute[187439]:  <os>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <boot dev="hd"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <smbios mode="sysinfo"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:  </os>
Oct  9 09:58:21 compute-0 nova_compute[187439]:  <features>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <acpi/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <apic/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <vmcoreinfo/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:  </features>
Oct  9 09:58:21 compute-0 nova_compute[187439]:  <clock offset="utc">
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <timer name="pit" tickpolicy="delay"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <timer name="hpet" present="no"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:  </clock>
Oct  9 09:58:21 compute-0 nova_compute[187439]:  <cpu mode="host-model" match="exact">
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <topology sockets="1" cores="1" threads="1"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:  </cpu>
Oct  9 09:58:21 compute-0 nova_compute[187439]:  <devices>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <disk type="network" device="disk">
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <driver type="raw" cache="none"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <source protocol="rbd" name="vms/3ffc41de-d07a-40ee-a277-623db113eda1_disk">
Oct  9 09:58:21 compute-0 nova_compute[187439]:        <host name="192.168.122.100" port="6789"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:        <host name="192.168.122.102" port="6789"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:        <host name="192.168.122.101" port="6789"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      </source>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <auth username="openstack">
Oct  9 09:58:21 compute-0 nova_compute[187439]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      </auth>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <target dev="vda" bus="virtio"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    </disk>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <disk type="network" device="cdrom">
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <driver type="raw" cache="none"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <source protocol="rbd" name="vms/3ffc41de-d07a-40ee-a277-623db113eda1_disk.config">
Oct  9 09:58:21 compute-0 nova_compute[187439]:        <host name="192.168.122.100" port="6789"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:        <host name="192.168.122.102" port="6789"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:        <host name="192.168.122.101" port="6789"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      </source>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <auth username="openstack">
Oct  9 09:58:21 compute-0 nova_compute[187439]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      </auth>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <target dev="sda" bus="sata"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    </disk>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <interface type="ethernet">
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <mac address="fa:16:3e:53:d9:2e"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <model type="virtio"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <driver name="vhost" rx_queue_size="512"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <mtu size="1442"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <target dev="tapeb8548dc-66"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    </interface>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <serial type="pty">
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <log file="/var/lib/nova/instances/3ffc41de-d07a-40ee-a277-623db113eda1/console.log" append="off"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    </serial>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <video>
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <model type="virtio"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    </video>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <input type="tablet" bus="usb"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <rng model="virtio">
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <backend model="random">/dev/urandom</backend>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    </rng>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <controller type="usb" index="0"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    <memballoon model="virtio">
Oct  9 09:58:21 compute-0 nova_compute[187439]:      <stats period="10"/>
Oct  9 09:58:21 compute-0 nova_compute[187439]:    </memballoon>
Oct  9 09:58:21 compute-0 nova_compute[187439]:  </devices>
Oct  9 09:58:21 compute-0 nova_compute[187439]: </domain>
Oct  9 09:58:21 compute-0 nova_compute[187439]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.857 2 DEBUG nova.compute.manager [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Preparing to wait for external event network-vif-plugged-eb8548dc-6635-4371-9e8f-c5b635941d12 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.858 2 DEBUG oslo_concurrency.lockutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "3ffc41de-d07a-40ee-a277-623db113eda1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.858 2 DEBUG oslo_concurrency.lockutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "3ffc41de-d07a-40ee-a277-623db113eda1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.858 2 DEBUG oslo_concurrency.lockutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "3ffc41de-d07a-40ee-a277-623db113eda1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.859 2 DEBUG nova.virt.libvirt.vif [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:58:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-172706500',display_name='tempest-TestNetworkBasicOps-server-172706500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-172706500',id=5,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDJSTqHJWrIfo1mN4bEVd0fVRgjxQS25gZjKH3NGwwQ9zKdcgq9+vWhuZvoPJGs0R+tT7AFviVN5gsk0ZZjp6J4sC0r1KbTYRYWw3Ckg2zIuat+ZsSbwAmmmI+FmlZx13w==',key_name='tempest-TestNetworkBasicOps-233590586',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-mapx1vd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:58:15Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=3ffc41de-d07a-40ee-a277-623db113eda1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eb8548dc-6635-4371-9e8f-c5b635941d12", "address": "fa:16:3e:53:d9:2e", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8548dc-66", "ovs_interfaceid": "eb8548dc-6635-4371-9e8f-c5b635941d12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.859 2 DEBUG nova.network.os_vif_util [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "eb8548dc-6635-4371-9e8f-c5b635941d12", "address": "fa:16:3e:53:d9:2e", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8548dc-66", "ovs_interfaceid": "eb8548dc-6635-4371-9e8f-c5b635941d12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.860 2 DEBUG nova.network.os_vif_util [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:d9:2e,bridge_name='br-int',has_traffic_filtering=True,id=eb8548dc-6635-4371-9e8f-c5b635941d12,network=Network(48ce5fca-3386-4b8a-82e2-88fc71a94881),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb8548dc-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.860 2 DEBUG os_vif [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:d9:2e,bridge_name='br-int',has_traffic_filtering=True,id=eb8548dc-6635-4371-9e8f-c5b635941d12,network=Network(48ce5fca-3386-4b8a-82e2-88fc71a94881),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb8548dc-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.861 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.862 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.874 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb8548dc-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.874 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeb8548dc-66, col_values=(('external_ids', {'iface-id': 'eb8548dc-6635-4371-9e8f-c5b635941d12', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:d9:2e', 'vm-uuid': '3ffc41de-d07a-40ee-a277-623db113eda1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:58:21 compute-0 NetworkManager[982]: <info>  [1760003901.8765] manager: (tapeb8548dc-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.882 2 INFO os_vif [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:d9:2e,bridge_name='br-int',has_traffic_filtering=True,id=eb8548dc-6635-4371-9e8f-c5b635941d12,network=Network(48ce5fca-3386-4b8a-82e2-88fc71a94881),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb8548dc-66')#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.909 2 DEBUG nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.910 2 DEBUG nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.910 2 DEBUG nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:53:d9:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.910 2 INFO nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Using config drive#033[00m
Oct  9 09:58:21 compute-0 nova_compute[187439]: 2025-10-09 09:58:21.929 2 DEBUG nova.storage.rbd_utils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 3ffc41de-d07a-40ee-a277-623db113eda1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:58:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:58:22] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Oct  9 09:58:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:58:22] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Oct  9 09:58:22 compute-0 nova_compute[187439]: 2025-10-09 09:58:22.912 2 INFO nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Creating config drive at /var/lib/nova/instances/3ffc41de-d07a-40ee-a277-623db113eda1/disk.config#033[00m
Oct  9 09:58:22 compute-0 nova_compute[187439]: 2025-10-09 09:58:22.918 2 DEBUG oslo_concurrency.processutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3ffc41de-d07a-40ee-a277-623db113eda1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc_a6pez9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:58:23 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v757: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  9 09:58:23 compute-0 nova_compute[187439]: 2025-10-09 09:58:23.046 2 DEBUG oslo_concurrency.processutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3ffc41de-d07a-40ee-a277-623db113eda1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc_a6pez9" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:58:23 compute-0 nova_compute[187439]: 2025-10-09 09:58:23.071 2 DEBUG nova.storage.rbd_utils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 3ffc41de-d07a-40ee-a277-623db113eda1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:58:23 compute-0 nova_compute[187439]: 2025-10-09 09:58:23.074 2 DEBUG oslo_concurrency.processutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3ffc41de-d07a-40ee-a277-623db113eda1/disk.config 3ffc41de-d07a-40ee-a277-623db113eda1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:58:23 compute-0 nova_compute[187439]: 2025-10-09 09:58:23.169 2 DEBUG oslo_concurrency.processutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3ffc41de-d07a-40ee-a277-623db113eda1/disk.config 3ffc41de-d07a-40ee-a277-623db113eda1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:58:23 compute-0 nova_compute[187439]: 2025-10-09 09:58:23.170 2 INFO nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Deleting local config drive /var/lib/nova/instances/3ffc41de-d07a-40ee-a277-623db113eda1/disk.config because it was imported into RBD.#033[00m
Oct  9 09:58:23 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct  9 09:58:23 compute-0 systemd[1]: Started libvirt secret daemon.
Oct  9 09:58:23 compute-0 kernel: tapeb8548dc-66: entered promiscuous mode
Oct  9 09:58:23 compute-0 nova_compute[187439]: 2025-10-09 09:58:23.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:23 compute-0 ovn_controller[83056]: 2025-10-09T09:58:23Z|00039|binding|INFO|Claiming lport eb8548dc-6635-4371-9e8f-c5b635941d12 for this chassis.
Oct  9 09:58:23 compute-0 ovn_controller[83056]: 2025-10-09T09:58:23Z|00040|binding|INFO|eb8548dc-6635-4371-9e8f-c5b635941d12: Claiming fa:16:3e:53:d9:2e 10.100.0.14
Oct  9 09:58:23 compute-0 NetworkManager[982]: <info>  [1760003903.2613] manager: (tapeb8548dc-66): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Oct  9 09:58:23 compute-0 nova_compute[187439]: 2025-10-09 09:58:23.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:23 compute-0 systemd-udevd[196253]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:58:23 compute-0 NetworkManager[982]: <info>  [1760003903.2934] device (tapeb8548dc-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 09:58:23 compute-0 NetworkManager[982]: <info>  [1760003903.2942] device (tapeb8548dc-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  9 09:58:23 compute-0 nova_compute[187439]: 2025-10-09 09:58:23.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:23 compute-0 ovn_controller[83056]: 2025-10-09T09:58:23Z|00041|binding|INFO|Setting lport eb8548dc-6635-4371-9e8f-c5b635941d12 ovn-installed in OVS
Oct  9 09:58:23 compute-0 nova_compute[187439]: 2025-10-09 09:58:23.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:23 compute-0 ovn_controller[83056]: 2025-10-09T09:58:23Z|00042|binding|INFO|Setting lport eb8548dc-6635-4371-9e8f-c5b635941d12 up in Southbound
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.409 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:d9:2e 10.100.0.14'], port_security=['fa:16:3e:53:d9:2e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3ffc41de-d07a-40ee-a277-623db113eda1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cac7953f-e25e-4486-a0fa-c6cbcac2f8ef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6496ebe5-cfc3-4a35-b1e6-27021c277fad, chassis=[<ovs.db.idl.Row object at 0x7f406a6797f0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f406a6797f0>], logical_port=eb8548dc-6635-4371-9e8f-c5b635941d12) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.411 92053 INFO neutron.agent.ovn.metadata.agent [-] Port eb8548dc-6635-4371-9e8f-c5b635941d12 in datapath 48ce5fca-3386-4b8a-82e2-88fc71a94881 bound to our chassis#033[00m
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.411 92053 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48ce5fca-3386-4b8a-82e2-88fc71a94881#033[00m
Oct  9 09:58:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:58:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:23.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.421 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[89b57d2c-f19a-478b-904c-2cf2f9789a2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.421 92053 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48ce5fca-31 in ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.425 192856 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48ce5fca-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.425 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[5faf88ce-6413-4f11-9364-d4f2302c84eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.426 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[53ce9b8e-0be3-4c5b-846c-20a1988af4aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:23 compute-0 systemd-machined[143379]: New machine qemu-2-instance-00000005.
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.438 92357 DEBUG oslo.privsep.daemon [-] privsep: reply[ba1f4b3f-2a8d-4192-bc34-46ea05f8f741]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:23 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000005.
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.459 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[26497d8e-ee02-4915-be1e-9ddcff03f811]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.484 192891 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e09bf9-a379-4cba-998e-4091a6517a25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.489 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[c1897cb1-5483-4f3b-a2e8-7a501da43fbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:23 compute-0 NetworkManager[982]: <info>  [1760003903.4899] manager: (tap48ce5fca-30): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Oct  9 09:58:23 compute-0 systemd-udevd[196255]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.516 192891 DEBUG oslo.privsep.daemon [-] privsep: reply[0c979e87-75c0-4c59-8dd8-5373ea2b8755]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.519 192891 DEBUG oslo.privsep.daemon [-] privsep: reply[2ca50a59-1cc1-4be0-afc9-2d2fb7fcf4ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:23 compute-0 NetworkManager[982]: <info>  [1760003903.5360] device (tap48ce5fca-30): carrier: link connected
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.539 192891 DEBUG oslo.privsep.daemon [-] privsep: reply[4572d655-41c5-473a-8986-9e906f0b872a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.558 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe426f9-10b8-4bac-960d-279f65cd86e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48ce5fca-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:a8:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 158806, 'reachable_time': 23224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 196281, 'error': None, 'target': 'ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.571 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[1a34cb07-5776-4c18-adf5-c43debc911d3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefa:a8ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 158806, 'tstamp': 158806}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 196282, 'error': None, 'target': 'ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.589 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa4657c-b70f-49b8-a2ba-596f99567129]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48ce5fca-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:a8:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 158806, 'reachable_time': 23224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 196283, 'error': None, 'target': 'ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
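
The privsep replies above are raw rtnetlink messages the agent fetched from inside the ovnmeta-48ce5fca-... namespace: two RTM_NEWLINK dumps for the veth tap48ce5fca-31 and one RTM_NEWADDR for its link-local address. A minimal pyroute2 sketch that yields the same message shapes (namespace name and interface index taken from the log; needs root on the node):

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881')
    try:
        for link in ns.get_links():                 # RTM_NEWLINK records, as in the replies above
            print(link.get_attr('IFLA_IFNAME'),     # e.g. tap48ce5fca-31
                  link.get_attr('IFLA_ADDRESS'),    # e.g. fa:16:3e:fa:a8:ee
                  link.get_attr('IFLA_OPERSTATE'))
        for addr in ns.get_addr(index=2):           # RTM_NEWADDR records for ifindex 2
            print(addr.get_attr('IFA_ADDRESS'))     # e.g. fe80::f816:3eff:fefa:a8ee
    finally:
        ns.close()
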
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.622 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[e1bdd394-f5c5-48ff-b35f-6a8ef187ac0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.679 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4ad214-7790-4e44-a7c1-e47d37cbb2e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.682 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48ce5fca-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.683 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.683 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48ce5fca-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:58:23 compute-0 kernel: tap48ce5fca-30: entered promiscuous mode
Oct  9 09:58:23 compute-0 NetworkManager[982]: <info>  [1760003903.6861] manager: (tap48ce5fca-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.690 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48ce5fca-30, col_values=(('external_ids', {'iface-id': 'b85a0af7-8e0c-4129-9420-36103d8f1eb6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
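
The three ovsdbapp commands above (DelPortCommand, AddPortCommand, DbSetCommand) detach the tap device from br-ex if present (a no-op here, hence "Transaction caused no change"), attach it to br-int, and stamp the Interface row with the Neutron port ID so ovn-controller can bind it. A sketch of the same sequence; the OVSDB endpoint and timeout are assumptions, since the agent's connection setup is not shown in this log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One transaction per command, matching the "txn n=1" entries in the log.
    api.del_port('tap48ce5fca-30', bridge='br-ex', if_exists=True).execute(check_error=True)
    api.add_port('br-int', 'tap48ce5fca-30', may_exist=True).execute(check_error=True)
    api.db_set('Interface', 'tap48ce5fca-30',
               ('external_ids', {'iface-id': 'b85a0af7-8e0c-4129-9420-36103d8f1eb6'})
               ).execute(check_error=True)
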
Oct  9 09:58:23 compute-0 nova_compute[187439]: 2025-10-09 09:58:23.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:23 compute-0 ovn_controller[83056]: 2025-10-09T09:58:23Z|00043|binding|INFO|Releasing lport b85a0af7-8e0c-4129-9420-36103d8f1eb6 from this chassis (sb_readonly=0)
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.693 92053 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48ce5fca-3386-4b8a-82e2-88fc71a94881.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48ce5fca-3386-4b8a-82e2-88fc71a94881.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
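
The ENOENT above is the expected first-run case: no haproxy has been spawned for this network yet, so there is no pidfile to read and the helper just logs at DEBUG and carries on. A rough sketch of that tolerant read (a hypothetical stand-in, not neutron's exact code):

    def get_value_from_file(path):
        # A missing pidfile on first start is normal; swallow only that case.
        try:
            with open(path) as f:
                return f.read().strip()
        except FileNotFoundError:
            return None

    pid = get_value_from_file(
        '/var/lib/neutron/external/pids/48ce5fca-3386-4b8a-82e2-88fc71a94881.pid.haproxy')
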
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.694 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[fa4118c4-ed58-459d-81b5-1e97ba1cff45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.695 92053 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: global
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]:    log         /dev/log local0 debug
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]:    log-tag     haproxy-metadata-proxy-48ce5fca-3386-4b8a-82e2-88fc71a94881
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]:    user        root
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]:    group       root
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]:    maxconn     1024
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]:    pidfile     /var/lib/neutron/external/pids/48ce5fca-3386-4b8a-82e2-88fc71a94881.pid.haproxy
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]:    daemon
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: defaults
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]:    log global
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]:    mode http
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]:    option httplog
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]:    option dontlognull
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]:    option http-server-close
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]:    option forwardfor
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]:    retries                 3
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]:    timeout http-request    30s
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]:    timeout connect         30s
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]:    timeout client          32s
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]:    timeout server          32s
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]:    timeout http-keep-alive 30s
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: listen listener
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]:    bind 169.254.169.254:80
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]:    server metadata /var/lib/neutron/metadata_proxy
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]:    http-request add-header X-OVN-Network-ID 48ce5fca-3386-4b8a-82e2-88fc71a94881
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
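
The rendered configuration binds the well-known metadata address 169.254.169.254:80 inside the namespace, proxies requests to the agent over the unix socket at /var/lib/neutron/metadata_proxy, and tags each request with the network UUID via the X-OVN-Network-ID header. From a guest on this network the round trip can be exercised with any HTTP client; the path below is the standard OpenStack metadata path, not something shown in this log:

    import http.client

    # 169.254.169.254:80 is the bind address from the haproxy config above.
    conn = http.client.HTTPConnection('169.254.169.254', 80, timeout=5)
    conn.request('GET', '/openstack/latest/meta_data.json')
    resp = conn.getresponse()
    print(resp.status, resp.read()[:200])
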
Oct  9 09:58:23 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:23.697 92053 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'env', 'PROCESS_TAG=haproxy-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48ce5fca-3386-4b8a-82e2-88fc71a94881.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
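
That command line is the whole launch recipe: neutron-rootwrap elevates privileges, "ip netns exec" enters the ovnmeta namespace, "env" tags the process for the kill scripts, and haproxy daemonizes against the config printed above. The same invocation as a subprocess call (arguments copied verbatim from the log, so it is only meaningful on a node where that namespace and config file exist):

    import subprocess

    subprocess.run(
        ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
         'ip', 'netns', 'exec', 'ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881',
         'env', 'PROCESS_TAG=haproxy-48ce5fca-3386-4b8a-82e2-88fc71a94881',
         'haproxy', '-f',
         '/var/lib/neutron/ovn-metadata-proxy/48ce5fca-3386-4b8a-82e2-88fc71a94881.conf'],
        check=True)
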
Oct  9 09:58:23 compute-0 nova_compute[187439]: 2025-10-09 09:58:23.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:58:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:23.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:58:24 compute-0 podman[196311]: 2025-10-09 09:58:24.019724735 +0000 UTC m=+0.033190454 container create 531e3ff5c6c8b1edaaac42429ef4b94bf816667b1e3888e5636353fe05a29de5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  9 09:58:24 compute-0 systemd[1]: Started libpod-conmon-531e3ff5c6c8b1edaaac42429ef4b94bf816667b1e3888e5636353fe05a29de5.scope.
Oct  9 09:58:24 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:58:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f191b821c8ddcd0b2d60469844626212a8a30da06a201822c763678eb76a4c98/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  9 09:58:24 compute-0 podman[196311]: 2025-10-09 09:58:24.088684389 +0000 UTC m=+0.102150148 container init 531e3ff5c6c8b1edaaac42429ef4b94bf816667b1e3888e5636353fe05a29de5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:58:24 compute-0 podman[196311]: 2025-10-09 09:58:24.095737798 +0000 UTC m=+0.109203539 container start 531e3ff5c6c8b1edaaac42429ef4b94bf816667b1e3888e5636353fe05a29de5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:58:24 compute-0 podman[196311]: 2025-10-09 09:58:24.005016834 +0000 UTC m=+0.018482584 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 09:58:24 compute-0 podman[196323]: 2025-10-09 09:58:24.099770907 +0000 UTC m=+0.048911464 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
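
The health_status record for the ovn_metadata_agent container embeds its full definition: host network and PID namespaces, privileged mode, and a healthcheck that runs /openstack/healthcheck from the mounted healthchecks directory. The same check podman runs on its timer can be invoked by hand; the container name is taken from the record above:

    import subprocess

    result = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
                            capture_output=True, text=True)
    # Exit status 0 corresponds to health_status=healthy in the log.
    print('healthy' if result.returncode == 0 else 'unhealthy',
          result.stdout or result.stderr)
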
Oct  9 09:58:24 compute-0 neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881[196334]: [NOTICE]   (196345) : New worker (196347) forked
Oct  9 09:58:24 compute-0 neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881[196334]: [NOTICE]   (196345) : Loading success.
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.234 2 DEBUG nova.network.neutron [req-748a2bfe-4027-42a2-8bcd-d1434611c0ab req-a0acfaa9-d34e-4353-bb03-a65bc0b73189 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Updated VIF entry in instance network info cache for port eb8548dc-6635-4371-9e8f-c5b635941d12. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.235 2 DEBUG nova.network.neutron [req-748a2bfe-4027-42a2-8bcd-d1434611c0ab req-a0acfaa9-d34e-4353-bb03-a65bc0b73189 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Updating instance_info_cache with network_info: [{"id": "eb8548dc-6635-4371-9e8f-c5b635941d12", "address": "fa:16:3e:53:d9:2e", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8548dc-66", "ovs_interfaceid": "eb8548dc-6635-4371-9e8f-c5b635941d12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.240 2 DEBUG nova.compute.manager [req-ffd6dd43-3355-43c8-8938-eb2e02a2d732 req-b7ebc4ff-e46e-4c90-af77-f362c72f0138 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Received event network-vif-plugged-eb8548dc-6635-4371-9e8f-c5b635941d12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.241 2 DEBUG oslo_concurrency.lockutils [req-ffd6dd43-3355-43c8-8938-eb2e02a2d732 req-b7ebc4ff-e46e-4c90-af77-f362c72f0138 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "3ffc41de-d07a-40ee-a277-623db113eda1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.241 2 DEBUG oslo_concurrency.lockutils [req-ffd6dd43-3355-43c8-8938-eb2e02a2d732 req-b7ebc4ff-e46e-4c90-af77-f362c72f0138 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "3ffc41de-d07a-40ee-a277-623db113eda1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.241 2 DEBUG oslo_concurrency.lockutils [req-ffd6dd43-3355-43c8-8938-eb2e02a2d732 req-b7ebc4ff-e46e-4c90-af77-f362c72f0138 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "3ffc41de-d07a-40ee-a277-623db113eda1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
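
The Acquiring/acquired/released triple above is the standard oslo.concurrency trace around popping a queued instance event; the lock name is simply the instance UUID with an "-events" suffix. The pattern in minimal form (lock name copied from the log):

    from oslo_concurrency import lockutils

    with lockutils.lock('3ffc41de-d07a-40ee-a277-623db113eda1-events'):
        # Pop the pending network-vif-plugged event for this instance here;
        # lockutils emits the same Acquiring/acquired/released lines at DEBUG.
        pass
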
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.241 2 DEBUG nova.compute.manager [req-ffd6dd43-3355-43c8-8938-eb2e02a2d732 req-b7ebc4ff-e46e-4c90-af77-f362c72f0138 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Processing event network-vif-plugged-eb8548dc-6635-4371-9e8f-c5b635941d12 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.265 2 DEBUG oslo_concurrency.lockutils [req-748a2bfe-4027-42a2-8bcd-d1434611c0ab req-a0acfaa9-d34e-4353-bb03-a65bc0b73189 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-3ffc41de-d07a-40ee-a277-623db113eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.606 2 DEBUG nova.compute.manager [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.608 2 DEBUG nova.virt.driver [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Emitting event <LifecycleEvent: 1760003904.6058412, 3ffc41de-d07a-40ee-a277-623db113eda1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.608 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] VM Started (Lifecycle Event)#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.612 2 DEBUG nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.614 2 INFO nova.virt.libvirt.driver [-] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Instance spawned successfully.#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.615 2 DEBUG nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.629 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.633 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.636 2 DEBUG nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.636 2 DEBUG nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.637 2 DEBUG nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.637 2 DEBUG nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.637 2 DEBUG nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.638 2 DEBUG nova.virt.libvirt.driver [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.664 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.664 2 DEBUG nova.virt.driver [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Emitting event <LifecycleEvent: 1760003904.6083658, 3ffc41de-d07a-40ee-a277-623db113eda1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.665 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] VM Paused (Lifecycle Event)#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.686 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.688 2 DEBUG nova.virt.driver [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Emitting event <LifecycleEvent: 1760003904.6112607, 3ffc41de-d07a-40ee-a277-623db113eda1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.688 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] VM Resumed (Lifecycle Event)#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.694 2 INFO nova.compute.manager [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Took 8.90 seconds to spawn the instance on the hypervisor.#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.695 2 DEBUG nova.compute.manager [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.701 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.703 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.723 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.749 2 INFO nova.compute.manager [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Took 9.60 seconds to build instance.#033[00m
Oct  9 09:58:24 compute-0 nova_compute[187439]: 2025-10-09 09:58:24.768 2 DEBUG oslo_concurrency.lockutils [None req-ae58459c-e1a5-4c0c-b4c1-acea9c39d147 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "3ffc41de-d07a-40ee-a277-623db113eda1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
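
This closes out the spawn: the build thread's wait for network-vif-plugged completed in 0 seconds (OVN had already bound the port), the libvirt Started/Paused/Resumed lifecycle burst was absorbed while the task state was still spawning, and the per-instance build lock was released after 9.675s. The event wait itself is a keyed rendezvous between the build thread and the external-event handler; a toy model of the idea, not nova's implementation:

    import threading

    pending = {}

    def prepare(event_name):
        pending[event_name] = threading.Event()

    def deliver(event_name):
        if event_name in pending:   # else: "No waiting events found", as logged later
            pending[event_name].set()

    name = 'network-vif-plugged-eb8548dc-6635-4371-9e8f-c5b635941d12'
    prepare(name)
    # ... plug the VIF; Neutron posts the external event and nova calls deliver() ...
    deliver(name)
    pending[name].wait(timeout=300)
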
Oct  9 09:58:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:24 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:25 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:25 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:25 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:25 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v758: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  9 09:58:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:25.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:25.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:26 compute-0 nova_compute[187439]: 2025-10-09 09:58:26.299 2 DEBUG nova.compute.manager [req-97927879-0910-4c49-8caf-6809990cc17e req-8026e03e-82c5-4386-ab80-81315b57beea b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Received event network-vif-plugged-eb8548dc-6635-4371-9e8f-c5b635941d12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:58:26 compute-0 nova_compute[187439]: 2025-10-09 09:58:26.299 2 DEBUG oslo_concurrency.lockutils [req-97927879-0910-4c49-8caf-6809990cc17e req-8026e03e-82c5-4386-ab80-81315b57beea b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "3ffc41de-d07a-40ee-a277-623db113eda1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:58:26 compute-0 nova_compute[187439]: 2025-10-09 09:58:26.300 2 DEBUG oslo_concurrency.lockutils [req-97927879-0910-4c49-8caf-6809990cc17e req-8026e03e-82c5-4386-ab80-81315b57beea b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "3ffc41de-d07a-40ee-a277-623db113eda1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:58:26 compute-0 nova_compute[187439]: 2025-10-09 09:58:26.300 2 DEBUG oslo_concurrency.lockutils [req-97927879-0910-4c49-8caf-6809990cc17e req-8026e03e-82c5-4386-ab80-81315b57beea b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "3ffc41de-d07a-40ee-a277-623db113eda1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:58:26 compute-0 nova_compute[187439]: 2025-10-09 09:58:26.300 2 DEBUG nova.compute.manager [req-97927879-0910-4c49-8caf-6809990cc17e req-8026e03e-82c5-4386-ab80-81315b57beea b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] No waiting events found dispatching network-vif-plugged-eb8548dc-6635-4371-9e8f-c5b635941d12 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 09:58:26 compute-0 nova_compute[187439]: 2025-10-09 09:58:26.300 2 WARNING nova.compute.manager [req-97927879-0910-4c49-8caf-6809990cc17e req-8026e03e-82c5-4386-ab80-81315b57beea b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Received unexpected event network-vif-plugged-eb8548dc-6635-4371-9e8f-c5b635941d12 for instance with vm_state active and task_state None.#033[00m
Oct  9 09:58:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:26 compute-0 nova_compute[187439]: 2025-10-09 09:58:26.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:26 compute-0 nova_compute[187439]: 2025-10-09 09:58:26.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:27 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v759: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct  9 09:58:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:27.061Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:27.069Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:27.070Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:27.070Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
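
All three webhook targets fail identically: the resolver at 192.168.122.80 has no records for the np000547830x.shiftstack names, so every notify retry ends in "no such host" until alertmanager gives up. The failure can be reproduced outside alertmanager with a one-line lookup (hostname and port taken from the error text):

    import socket

    try:
        print(socket.getaddrinfo('np0005478304.shiftstack', 8443))
    except socket.gaierror as exc:
        print('lookup failed:', exc)   # corresponds to "no such host" above
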
Oct  9 09:58:27 compute-0 NetworkManager[982]: <info>  [1760003907.3778] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Oct  9 09:58:27 compute-0 nova_compute[187439]: 2025-10-09 09:58:27.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:27 compute-0 NetworkManager[982]: <info>  [1760003907.3785] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Oct  9 09:58:27 compute-0 ovn_controller[83056]: 2025-10-09T09:58:27Z|00044|binding|INFO|Releasing lport b85a0af7-8e0c-4129-9420-36103d8f1eb6 from this chassis (sb_readonly=0)
Oct  9 09:58:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:27.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:27 compute-0 ovn_controller[83056]: 2025-10-09T09:58:27Z|00045|binding|INFO|Releasing lport b85a0af7-8e0c-4129-9420-36103d8f1eb6 from this chassis (sb_readonly=0)
Oct  9 09:58:27 compute-0 nova_compute[187439]: 2025-10-09 09:58:27.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:27 compute-0 nova_compute[187439]: 2025-10-09 09:58:27.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:27 compute-0 nova_compute[187439]: 2025-10-09 09:58:27.561 2 DEBUG nova.compute.manager [req-e247899d-42e3-4f2d-9a99-816acbf697fa req-7589a71b-c199-461d-b290-9d222115946f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Received event network-changed-eb8548dc-6635-4371-9e8f-c5b635941d12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:58:27 compute-0 nova_compute[187439]: 2025-10-09 09:58:27.562 2 DEBUG nova.compute.manager [req-e247899d-42e3-4f2d-9a99-816acbf697fa req-7589a71b-c199-461d-b290-9d222115946f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Refreshing instance network info cache due to event network-changed-eb8548dc-6635-4371-9e8f-c5b635941d12. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 09:58:27 compute-0 nova_compute[187439]: 2025-10-09 09:58:27.562 2 DEBUG oslo_concurrency.lockutils [req-e247899d-42e3-4f2d-9a99-816acbf697fa req-7589a71b-c199-461d-b290-9d222115946f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-3ffc41de-d07a-40ee-a277-623db113eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:58:27 compute-0 nova_compute[187439]: 2025-10-09 09:58:27.562 2 DEBUG oslo_concurrency.lockutils [req-e247899d-42e3-4f2d-9a99-816acbf697fa req-7589a71b-c199-461d-b290-9d222115946f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-3ffc41de-d07a-40ee-a277-623db113eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:58:27 compute-0 nova_compute[187439]: 2025-10-09 09:58:27.562 2 DEBUG nova.network.neutron [req-e247899d-42e3-4f2d-9a99-816acbf697fa req-7589a71b-c199-461d-b290-9d222115946f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Refreshing network info cache for port eb8548dc-6635-4371-9e8f-c5b635941d12 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 09:58:27 compute-0 podman[196398]: 2025-10-09 09:58:27.637822599 +0000 UTC m=+0.071781403 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct  9 09:58:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:27.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:28.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:28.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:28.902Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:28.902Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:29 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v760: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct  9 09:58:29 compute-0 nova_compute[187439]: 2025-10-09 09:58:29.173 2 DEBUG nova.network.neutron [req-e247899d-42e3-4f2d-9a99-816acbf697fa req-7589a71b-c199-461d-b290-9d222115946f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Updated VIF entry in instance network info cache for port eb8548dc-6635-4371-9e8f-c5b635941d12. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 09:58:29 compute-0 nova_compute[187439]: 2025-10-09 09:58:29.176 2 DEBUG nova.network.neutron [req-e247899d-42e3-4f2d-9a99-816acbf697fa req-7589a71b-c199-461d-b290-9d222115946f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Updating instance_info_cache with network_info: [{"id": "eb8548dc-6635-4371-9e8f-c5b635941d12", "address": "fa:16:3e:53:d9:2e", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8548dc-66", "ovs_interfaceid": "eb8548dc-6635-4371-9e8f-c5b635941d12", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:58:29 compute-0 nova_compute[187439]: 2025-10-09 09:58:29.189 2 DEBUG oslo_concurrency.lockutils [req-e247899d-42e3-4f2d-9a99-816acbf697fa req-7589a71b-c199-461d-b290-9d222115946f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-3ffc41de-d07a-40ee-a277-623db113eda1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
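
Compared with the cache write at 09:58:24.235, this refresh shows the port now active and carrying floating IP 192.168.122.236 on fixed address 10.100.0.14. Those addresses sit two levels down in the structure nova caches per VIF; a trimmed walk over the JSON logged above:

    import json

    vifs = json.loads('''[{"id": "eb8548dc-6635-4371-9e8f-c5b635941d12",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.14",
        "floating_ips": [{"address": "192.168.122.236"}]}]}]}}]''')
    for vif in vifs:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print(ip['address'], [f['address'] for f in ip['floating_ips']])
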
Oct  9 09:58:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:29.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:29.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:30 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:31 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v761: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct  9 09:58:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:31.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:31 compute-0 nova_compute[187439]: 2025-10-09 09:58:31.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:31.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:31 compute-0 nova_compute[187439]: 2025-10-09 09:58:31.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:58:32] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Oct  9 09:58:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:58:32] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Oct  9 09:58:33 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v762: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct  9 09:58:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:33.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:33.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:58:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:58:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:34 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:35 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v763: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct  9 09:58:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:35.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:35 compute-0 ovn_controller[83056]: 2025-10-09T09:58:35Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:53:d9:2e 10.100.0.14
Oct  9 09:58:35 compute-0 ovn_controller[83056]: 2025-10-09T09:58:35Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:d9:2e 10.100.0.14
Oct  9 09:58:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:58:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:35.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:58:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:36 compute-0 nova_compute[187439]: 2025-10-09 09:58:36.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:36 compute-0 nova_compute[187439]: 2025-10-09 09:58:36.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:37 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v764: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Oct  9 09:58:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:37.062Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:37.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:37.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:37.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:37.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:37 compute-0 podman[196451]: 2025-10-09 09:58:37.639387241 +0000 UTC m=+0.078392117 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  9 09:58:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:37.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:38.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:38.908Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:38.908Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:38.909Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:39 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v765: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 221 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct  9 09:58:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:39.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:39.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:40 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:41 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v766: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 221 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct  9 09:58:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:41.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:41 compute-0 nova_compute[187439]: 2025-10-09 09:58:41.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:41.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:41 compute-0 nova_compute[187439]: 2025-10-09 09:58:41.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:41 compute-0 nova_compute[187439]: 2025-10-09 09:58:41.912 2 INFO nova.compute.manager [None req-122a8f13-9149-4b5d-be54-36cd0ac619cb 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Get console output#033[00m
Oct  9 09:58:41 compute-0 nova_compute[187439]: 2025-10-09 09:58:41.917 589 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.163 2 DEBUG oslo_concurrency.lockutils [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "3ffc41de-d07a-40ee-a277-623db113eda1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.163 2 DEBUG oslo_concurrency.lockutils [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "3ffc41de-d07a-40ee-a277-623db113eda1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.164 2 DEBUG oslo_concurrency.lockutils [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "3ffc41de-d07a-40ee-a277-623db113eda1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.164 2 DEBUG oslo_concurrency.lockutils [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "3ffc41de-d07a-40ee-a277-623db113eda1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.164 2 DEBUG oslo_concurrency.lockutils [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "3ffc41de-d07a-40ee-a277-623db113eda1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.165 2 INFO nova.compute.manager [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Terminating instance#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.165 2 DEBUG nova.compute.manager [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  9 09:58:42 compute-0 kernel: tapeb8548dc-66 (unregistering): left promiscuous mode
Oct  9 09:58:42 compute-0 NetworkManager[982]: <info>  [1760003922.2019] device (tapeb8548dc-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:42 compute-0 ovn_controller[83056]: 2025-10-09T09:58:42Z|00046|binding|INFO|Releasing lport eb8548dc-6635-4371-9e8f-c5b635941d12 from this chassis (sb_readonly=0)
Oct  9 09:58:42 compute-0 ovn_controller[83056]: 2025-10-09T09:58:42Z|00047|binding|INFO|Setting lport eb8548dc-6635-4371-9e8f-c5b635941d12 down in Southbound
Oct  9 09:58:42 compute-0 ovn_controller[83056]: 2025-10-09T09:58:42Z|00048|binding|INFO|Removing iface tapeb8548dc-66 ovn-installed in OVS
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:42 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:42.218 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:d9:2e 10.100.0.14'], port_security=['fa:16:3e:53:d9:2e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3ffc41de-d07a-40ee-a277-623db113eda1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cac7953f-e25e-4486-a0fa-c6cbcac2f8ef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6496ebe5-cfc3-4a35-b1e6-27021c277fad, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f406a6797f0>], logical_port=eb8548dc-6635-4371-9e8f-c5b635941d12) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f406a6797f0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:58:42 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:42.219 92053 INFO neutron.agent.ovn.metadata.agent [-] Port eb8548dc-6635-4371-9e8f-c5b635941d12 in datapath 48ce5fca-3386-4b8a-82e2-88fc71a94881 unbound from our chassis#033[00m
Oct  9 09:58:42 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:42.220 92053 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48ce5fca-3386-4b8a-82e2-88fc71a94881, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  9 09:58:42 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:42.223 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[950124a8-6b0a-4e05-afe9-ac221e568120]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:42 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:42.224 92053 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881 namespace which is not needed anymore#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:42 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Deactivated successfully.
Oct  9 09:58:42 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Consumed 11.613s CPU time.
Oct  9 09:58:42 compute-0 systemd-machined[143379]: Machine qemu-2-instance-00000005 terminated.
Oct  9 09:58:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:58:42] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Oct  9 09:58:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:58:42] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Oct  9 09:58:42 compute-0 neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881[196334]: [NOTICE]   (196345) : haproxy version is 2.8.14-c23fe91
Oct  9 09:58:42 compute-0 neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881[196334]: [NOTICE]   (196345) : path to executable is /usr/sbin/haproxy
Oct  9 09:58:42 compute-0 neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881[196334]: [WARNING]  (196345) : Exiting Master process...
Oct  9 09:58:42 compute-0 neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881[196334]: [ALERT]    (196345) : Current worker (196347) exited with code 143 (Terminated)
Oct  9 09:58:42 compute-0 neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881[196334]: [WARNING]  (196345) : All workers exited. Exiting... (0)
Oct  9 09:58:42 compute-0 systemd[1]: libpod-531e3ff5c6c8b1edaaac42429ef4b94bf816667b1e3888e5636353fe05a29de5.scope: Deactivated successfully.
Oct  9 09:58:42 compute-0 conmon[196334]: conmon 531e3ff5c6c8b1edaaac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-531e3ff5c6c8b1edaaac42429ef4b94bf816667b1e3888e5636353fe05a29de5.scope/container/memory.events
Oct  9 09:58:42 compute-0 podman[196501]: 2025-10-09 09:58:42.338587756 +0000 UTC m=+0.037213606 container died 531e3ff5c6c8b1edaaac42429ef4b94bf816667b1e3888e5636353fe05a29de5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:58:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f191b821c8ddcd0b2d60469844626212a8a30da06a201822c763678eb76a4c98-merged.mount: Deactivated successfully.
Oct  9 09:58:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-531e3ff5c6c8b1edaaac42429ef4b94bf816667b1e3888e5636353fe05a29de5-userdata-shm.mount: Deactivated successfully.
Oct  9 09:58:42 compute-0 podman[196501]: 2025-10-09 09:58:42.36293614 +0000 UTC m=+0.061561969 container cleanup 531e3ff5c6c8b1edaaac42429ef4b94bf816667b1e3888e5636353fe05a29de5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:58:42 compute-0 systemd[1]: libpod-conmon-531e3ff5c6c8b1edaaac42429ef4b94bf816667b1e3888e5636353fe05a29de5.scope: Deactivated successfully.
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.390 2 INFO nova.virt.libvirt.driver [-] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Instance destroyed successfully.#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.390 2 DEBUG nova.objects.instance [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'resources' on Instance uuid 3ffc41de-d07a-40ee-a277-623db113eda1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.400 2 DEBUG nova.virt.libvirt.vif [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:58:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-172706500',display_name='tempest-TestNetworkBasicOps-server-172706500',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-172706500',id=5,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDJSTqHJWrIfo1mN4bEVd0fVRgjxQS25gZjKH3NGwwQ9zKdcgq9+vWhuZvoPJGs0R+tT7AFviVN5gsk0ZZjp6J4sC0r1KbTYRYWw3Ckg2zIuat+ZsSbwAmmmI+FmlZx13w==',key_name='tempest-TestNetworkBasicOps-233590586',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:58:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-mapx1vd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:58:24Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=3ffc41de-d07a-40ee-a277-623db113eda1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eb8548dc-6635-4371-9e8f-c5b635941d12", "address": "fa:16:3e:53:d9:2e", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8548dc-66", "ovs_interfaceid": "eb8548dc-6635-4371-9e8f-c5b635941d12", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.400 2 DEBUG nova.network.os_vif_util [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "eb8548dc-6635-4371-9e8f-c5b635941d12", "address": "fa:16:3e:53:d9:2e", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8548dc-66", "ovs_interfaceid": "eb8548dc-6635-4371-9e8f-c5b635941d12", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.402 2 DEBUG nova.network.os_vif_util [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:d9:2e,bridge_name='br-int',has_traffic_filtering=True,id=eb8548dc-6635-4371-9e8f-c5b635941d12,network=Network(48ce5fca-3386-4b8a-82e2-88fc71a94881),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb8548dc-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.402 2 DEBUG os_vif [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:d9:2e,bridge_name='br-int',has_traffic_filtering=True,id=eb8548dc-6635-4371-9e8f-c5b635941d12,network=Network(48ce5fca-3386-4b8a-82e2-88fc71a94881),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb8548dc-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.404 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb8548dc-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.408 2 INFO os_vif [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:d9:2e,bridge_name='br-int',has_traffic_filtering=True,id=eb8548dc-6635-4371-9e8f-c5b635941d12,network=Network(48ce5fca-3386-4b8a-82e2-88fc71a94881),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb8548dc-66')#033[00m
Oct  9 09:58:42 compute-0 podman[196534]: 2025-10-09 09:58:42.416689495 +0000 UTC m=+0.033845358 container remove 531e3ff5c6c8b1edaaac42429ef4b94bf816667b1e3888e5636353fe05a29de5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:58:42 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:42.426 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[253b2082-c860-4252-9a46-abbf152aa8a5]: (4, ('Thu Oct  9 09:58:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881 (531e3ff5c6c8b1edaaac42429ef4b94bf816667b1e3888e5636353fe05a29de5)\n531e3ff5c6c8b1edaaac42429ef4b94bf816667b1e3888e5636353fe05a29de5\nThu Oct  9 09:58:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881 (531e3ff5c6c8b1edaaac42429ef4b94bf816667b1e3888e5636353fe05a29de5)\n531e3ff5c6c8b1edaaac42429ef4b94bf816667b1e3888e5636353fe05a29de5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:42 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:42.427 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[ff054281-d4a1-46bd-a40f-37ae3ab39c27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:42 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:42.428 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48ce5fca-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:58:42 compute-0 kernel: tap48ce5fca-30: left promiscuous mode
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:42 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:42.435 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[36ca0e8c-1bda-4b3f-90db-220ce3999e87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:42 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:42.455 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8883dd-0bb0-4fd5-b773-106833d6f55e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:42 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:42.456 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb778fd-d093-4d0e-b990-8e0b5e3b615e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:42 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:42.470 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[e165d91b-05c8-4c1f-858b-fa7a75496942]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 158801, 'reachable_time': 41068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 196574, 'error': None, 'target': 'ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:42 compute-0 systemd[1]: run-netns-ovnmeta\x2d48ce5fca\x2d3386\x2d4b8a\x2d82e2\x2d88fc71a94881.mount: Deactivated successfully.
Oct  9 09:58:42 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:42.472 92357 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  9 09:58:42 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:58:42.472 92357 DEBUG oslo.privsep.daemon [-] privsep: reply[64d7f743-875f-4551-bf6a-810d715ef6ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.615 2 INFO nova.virt.libvirt.driver [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Deleting instance files /var/lib/nova/instances/3ffc41de-d07a-40ee-a277-623db113eda1_del#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.616 2 INFO nova.virt.libvirt.driver [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Deletion of /var/lib/nova/instances/3ffc41de-d07a-40ee-a277-623db113eda1_del complete#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.655 2 INFO nova.compute.manager [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Took 0.49 seconds to destroy the instance on the hypervisor.#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.656 2 DEBUG oslo.service.loopingcall [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.656 2 DEBUG nova.compute.manager [-] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  9 09:58:42 compute-0 nova_compute[187439]: 2025-10-09 09:58:42.656 2 DEBUG nova.network.neutron [-] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  9 09:58:43 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v767: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 221 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct  9 09:58:43 compute-0 nova_compute[187439]: 2025-10-09 09:58:43.036 2 DEBUG nova.compute.manager [req-bf7e26a8-d833-4d4f-a44c-ca659e57fb92 req-425698fe-355c-45f9-bdf0-5bcfe61c39c5 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Received event network-vif-unplugged-eb8548dc-6635-4371-9e8f-c5b635941d12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:58:43 compute-0 nova_compute[187439]: 2025-10-09 09:58:43.037 2 DEBUG oslo_concurrency.lockutils [req-bf7e26a8-d833-4d4f-a44c-ca659e57fb92 req-425698fe-355c-45f9-bdf0-5bcfe61c39c5 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "3ffc41de-d07a-40ee-a277-623db113eda1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:58:43 compute-0 nova_compute[187439]: 2025-10-09 09:58:43.037 2 DEBUG oslo_concurrency.lockutils [req-bf7e26a8-d833-4d4f-a44c-ca659e57fb92 req-425698fe-355c-45f9-bdf0-5bcfe61c39c5 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "3ffc41de-d07a-40ee-a277-623db113eda1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:58:43 compute-0 nova_compute[187439]: 2025-10-09 09:58:43.037 2 DEBUG oslo_concurrency.lockutils [req-bf7e26a8-d833-4d4f-a44c-ca659e57fb92 req-425698fe-355c-45f9-bdf0-5bcfe61c39c5 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "3ffc41de-d07a-40ee-a277-623db113eda1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:58:43 compute-0 nova_compute[187439]: 2025-10-09 09:58:43.037 2 DEBUG nova.compute.manager [req-bf7e26a8-d833-4d4f-a44c-ca659e57fb92 req-425698fe-355c-45f9-bdf0-5bcfe61c39c5 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] No waiting events found dispatching network-vif-unplugged-eb8548dc-6635-4371-9e8f-c5b635941d12 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 09:58:43 compute-0 nova_compute[187439]: 2025-10-09 09:58:43.037 2 DEBUG nova.compute.manager [req-bf7e26a8-d833-4d4f-a44c-ca659e57fb92 req-425698fe-355c-45f9-bdf0-5bcfe61c39c5 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Received event network-vif-unplugged-eb8548dc-6635-4371-9e8f-c5b635941d12 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  9 09:58:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:43.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:43 compute-0 nova_compute[187439]: 2025-10-09 09:58:43.489 2 DEBUG nova.network.neutron [-] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:58:43 compute-0 nova_compute[187439]: 2025-10-09 09:58:43.499 2 INFO nova.compute.manager [-] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Took 0.84 seconds to deallocate network for instance.#033[00m
Oct  9 09:58:43 compute-0 nova_compute[187439]: 2025-10-09 09:58:43.530 2 DEBUG oslo_concurrency.lockutils [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:58:43 compute-0 nova_compute[187439]: 2025-10-09 09:58:43.531 2 DEBUG oslo_concurrency.lockutils [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:58:43 compute-0 nova_compute[187439]: 2025-10-09 09:58:43.549 2 DEBUG nova.compute.manager [req-bf9273c5-c6d8-482d-b085-95f463d00c67 req-2682ba5e-0bbb-47cd-a425-354bd54ff6de b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Received event network-vif-deleted-eb8548dc-6635-4371-9e8f-c5b635941d12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:58:43 compute-0 nova_compute[187439]: 2025-10-09 09:58:43.570 2 DEBUG oslo_concurrency.processutils [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:58:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:43.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:43 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:58:43 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2037300683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:58:43 compute-0 nova_compute[187439]: 2025-10-09 09:58:43.921 2 DEBUG oslo_concurrency.processutils [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 09:58:43 compute-0 nova_compute[187439]: 2025-10-09 09:58:43.926 2 DEBUG nova.compute.provider_tree [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  9 09:58:43 compute-0 nova_compute[187439]: 2025-10-09 09:58:43.940 2 DEBUG nova.scheduler.client.report [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  9 09:58:43 compute-0 nova_compute[187439]: 2025-10-09 09:58:43.952 2 DEBUG oslo_concurrency.lockutils [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.422s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:58:43 compute-0 nova_compute[187439]: 2025-10-09 09:58:43.970 2 INFO nova.scheduler.client.report [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Deleted allocations for instance 3ffc41de-d07a-40ee-a277-623db113eda1
Oct  9 09:58:44 compute-0 nova_compute[187439]: 2025-10-09 09:58:44.012 2 DEBUG oslo_concurrency.lockutils [None req-c8a9d675-68b2-4e2e-8695-e29a53fc4750 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "3ffc41de-d07a-40ee-a277-623db113eda1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
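[annotation] The inventory dict logged above is what placement schedules against. Placement's documented capacity rule is (total - reserved) * allocation_ratio per resource class; a quick check with the exact numbers from this log:

    # Inventory exactly as reported for provider f97cf330-2912-473f-81a8-cda2f8811838.
    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 4, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # MEMORY_MB 7168.0, VCPU 16.0, DISK_GB 52.2

So this 4-vCPU node can hold up to 16 schedulable vCPUs thanks to the 4.0 overcommit ratio, while memory is not overcommitted at all.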
Oct  9 09:58:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:44 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:44 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:44 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:45 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:45 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v768: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 221 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct  9 09:58:45 compute-0 nova_compute[187439]: 2025-10-09 09:58:45.098 2 DEBUG nova.compute.manager [req-61bf71bc-91e7-480e-8554-707a2f5e8128 req-6c0062ba-4ca5-41c9-936d-f38c1280159a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Received event network-vif-plugged-eb8548dc-6635-4371-9e8f-c5b635941d12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  9 09:58:45 compute-0 nova_compute[187439]: 2025-10-09 09:58:45.099 2 DEBUG oslo_concurrency.lockutils [req-61bf71bc-91e7-480e-8554-707a2f5e8128 req-6c0062ba-4ca5-41c9-936d-f38c1280159a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "3ffc41de-d07a-40ee-a277-623db113eda1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:58:45 compute-0 nova_compute[187439]: 2025-10-09 09:58:45.099 2 DEBUG oslo_concurrency.lockutils [req-61bf71bc-91e7-480e-8554-707a2f5e8128 req-6c0062ba-4ca5-41c9-936d-f38c1280159a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "3ffc41de-d07a-40ee-a277-623db113eda1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:58:45 compute-0 nova_compute[187439]: 2025-10-09 09:58:45.099 2 DEBUG oslo_concurrency.lockutils [req-61bf71bc-91e7-480e-8554-707a2f5e8128 req-6c0062ba-4ca5-41c9-936d-f38c1280159a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "3ffc41de-d07a-40ee-a277-623db113eda1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:58:45 compute-0 nova_compute[187439]: 2025-10-09 09:58:45.099 2 DEBUG nova.compute.manager [req-61bf71bc-91e7-480e-8554-707a2f5e8128 req-6c0062ba-4ca5-41c9-936d-f38c1280159a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] No waiting events found dispatching network-vif-plugged-eb8548dc-6635-4371-9e8f-c5b635941d12 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  9 09:58:45 compute-0 nova_compute[187439]: 2025-10-09 09:58:45.100 2 WARNING nova.compute.manager [req-61bf71bc-91e7-480e-8554-707a2f5e8128 req-6c0062ba-4ca5-41c9-936d-f38c1280159a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Received unexpected event network-vif-plugged-eb8548dc-6635-4371-9e8f-c5b635941d12 for instance with vm_state deleted and task_state None.
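[annotation] This WARNING is the benign outcome of nova's pop-event pattern when a late network-vif-plugged event races with instance deletion: the per-instance "-events" lock guards a waiter table, and an event with no registered waiter is logged and dropped. A simplified, illustrative sketch of that dispatch shape (not nova's actual code):

    import threading

    class InstanceEvents:
        """Toy waiter table: event name -> threading.Event."""
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}

        def prepare(self, name):
            with self._lock:
                ev = threading.Event()
                self._waiters[name] = ev
                return ev

        def pop_event(self, name):
            with self._lock:  # mirrors the "-events" lock acquire/release above
                return self._waiters.pop(name, None)

    events = InstanceEvents()
    waiter = events.pop_event("network-vif-plugged-eb8548dc")
    if waiter is None:
        print("unexpected event; no waiter registered (instance already deleted)")
    else:
        waiter.set()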
Oct  9 09:58:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:45.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:45.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
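[annotation] The anonymous "HEAD / HTTP/1.0" requests arriving every ~2s from 192.168.122.100 and .102 are load-balancer health probes against radosgw's beast frontend; a healthy RGW answers 200 with an empty body, as logged. The same probe via the standard library (the port is an assumption, since the log does not show where beast listens; the LB speaks HTTP/1.0 but the status check is identical):

    import http.client

    # Port 8080 is assumed; substitute the beast frontend's actual port.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # expect 200 from a healthy RGW
    conn.close()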
Oct  9 09:58:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:46 compute-0 nova_compute[187439]: 2025-10-09 09:58:46.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:47 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v769: 337 pgs: 337 active+clean; 48 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 259 KiB/s rd, 2.1 MiB/s wr, 117 op/s
Oct  9 09:58:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:47.063Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:47.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:47.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:47.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
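[annotation] All three ceph-dashboard webhook receivers fail identically: the *.shiftstack names do not resolve against the resolver at 192.168.122.80, so every POST dies at DNS before any TCP connection. The failing lookups can be reproduced in two lines (hostnames taken from the log; run from this host so the same resolver is consulted):

    import socket

    for host in ("np0005478302.shiftstack", "np0005478303.shiftstack", "np0005478304.shiftstack"):
        try:
            socket.getaddrinfo(host, 8443)
            print(host, "resolves")
        except socket.gaierror as exc:
            print(host, "->", exc)  # expect: Name or service not known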
Oct  9 09:58:47 compute-0 nova_compute[187439]: 2025-10-09 09:58:47.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:47.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:47.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:48.895Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:48.904Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:48.905Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:48.905Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:49 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v770: 337 pgs: 337 active+clean; 48 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 56 op/s
Oct  9 09:58:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:49.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:58:49
Oct  9 09:58:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:58:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 09:58:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.log', 'backups', 'default.rgw.meta', '.nfs', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'images', '.mgr']
Oct  9 09:58:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
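[annotation] "prepared 0/10 upmap changes" means the balancer found nothing to move: with all 337 PGs active+clean on pools this small, the distribution is already within the configured max misplaced 0.050000. The same verdict can be read back with `ceph balancer status` (the command is standard; the JSON key names here are assumed, check your release):

    import json
    import subprocess

    out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    status = json.loads(out)
    print(status.get("mode"), status.get("active"))  # expect: upmap True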
Oct  9 09:58:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:58:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:58:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:58:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:58:49 compute-0 podman[196605]: 2025-10-09 09:58:49.621878924 +0000 UTC m=+0.054436683 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
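[annotation] These podman `health_status=healthy` events come from the container's built-in healthcheck timer running the mounted /openstack/healthcheck script. The latest result can be queried directly from podman inspect; recent podman exposes a Docker-compatible State.Health object, but treat the exact field path as an assumption and check both spellings:

    import json
    import subprocess

    out = subprocess.run(["podman", "inspect", "iscsid"],
                         capture_output=True, text=True, check=True).stdout
    state = json.loads(out)[0]["State"]
    # Older podman builds used State.Healthcheck instead of State.Health.
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), health.get("FailingStreak"))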
Oct  9 09:58:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:58:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:58:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:58:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:58:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:58:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:58:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:58:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:58:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:58:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:58:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:58:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:58:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:58:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:58:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:58:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:49.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:58:49 compute-0 nova_compute[187439]: 2025-10-09 09:58:49.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:49 compute-0 nova_compute[187439]: 2025-10-09 09:58:49.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
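[annotation] ganesha keeps re-entering its 90-second grace window with nothing to reclaim (reclaim complete(0), clid count(0)). When grepping a longer capture for these cycles, a small regex pulls the duration straight out of the line format logged here:

    import re

    line = ("09/10/2025 09:58:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] "
            "nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90")
    m = re.search(r"IN GRACE, duration (\d+)", line)
    if m:
        print("grace seconds:", int(m.group(1)))  # -> 90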
Oct  9 09:58:51 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v771: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 57 op/s
Oct  9 09:58:51 compute-0 nova_compute[187439]: 2025-10-09 09:58:51.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:58:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:51.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:51 compute-0 nova_compute[187439]: 2025-10-09 09:58:51.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:58:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:51.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:58:52 compute-0 nova_compute[187439]: 2025-10-09 09:58:52.243 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:58:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:58:52] "GET /metrics HTTP/1.1" 200 48527 "" "Prometheus/2.51.0"
Oct  9 09:58:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:58:52] "GET /metrics HTTP/1.1" 200 48527 "" "Prometheus/2.51.0"
Oct  9 09:58:52 compute-0 nova_compute[187439]: 2025-10-09 09:58:52.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:53 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v772: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 56 op/s
Oct  9 09:58:53 compute-0 nova_compute[187439]: 2025-10-09 09:58:53.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:58:53 compute-0 nova_compute[187439]: 2025-10-09 09:58:53.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:58:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:53.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:53.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:54 compute-0 nova_compute[187439]: 2025-10-09 09:58:54.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:58:54 compute-0 nova_compute[187439]: 2025-10-09 09:58:54.263 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:58:54 compute-0 nova_compute[187439]: 2025-10-09 09:58:54.264 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:58:54 compute-0 nova_compute[187439]: 2025-10-09 09:58:54.264 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:58:54 compute-0 nova_compute[187439]: 2025-10-09 09:58:54.264 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  9 09:58:54 compute-0 nova_compute[187439]: 2025-10-09 09:58:54.264 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 09:58:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:58:54 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/474588185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:58:54 compute-0 nova_compute[187439]: 2025-10-09 09:58:54.630 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
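[annotation] The resource tracker's DISK_GB inventory comes from this `ceph df` call (the command line is copied verbatim from the log). A minimal stand-alone version of the same probe, assuming the client.openstack keyring and /etc/ceph/ceph.conf are readable as they evidently are here; the JSON field names follow recent Ceph releases and should be verified against yours:

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)["stats"]
    # total_avail_bytes feeds the free-disk figure (~60 GiB avail above).
    print(stats["total_bytes"], stats["total_avail_bytes"])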
Oct  9 09:58:54 compute-0 podman[196674]: 2025-10-09 09:58:54.641299807 +0000 UTC m=+0.076656398 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  9 09:58:54 compute-0 nova_compute[187439]: 2025-10-09 09:58:54.873 2 WARNING nova.virt.libvirt.driver [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  9 09:58:54 compute-0 nova_compute[187439]: 2025-10-09 09:58:54.875 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4705MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  9 09:58:54 compute-0 nova_compute[187439]: 2025-10-09 09:58:54.875 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:58:54 compute-0 nova_compute[187439]: 2025-10-09 09:58:54.875 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:58:54 compute-0 nova_compute[187439]: 2025-10-09 09:58:54.924 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  9 09:58:54 compute-0 nova_compute[187439]: 2025-10-09 09:58:54.925 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
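[annotation] The "Final resource view" line is the one-line summary worth extracting when auditing many audit cycles (used_ram=512MB here is just the reserved memory, since no instances remain). A parser for its key=value pairs, written against the exact line format above:

    import re

    line = ("Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB "
            "used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[]")
    fields = dict(re.findall(r"(\w+)=(\S+)", line))
    print(fields["phys_ram"], fields["used_ram"], fields["used_vcpus"])  # 7680MB 512MB 0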
Oct  9 09:58:54 compute-0 nova_compute[187439]: 2025-10-09 09:58:54.938 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 09:58:55 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v773: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 56 op/s
Oct  9 09:58:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:58:55 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3921144099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:58:55 compute-0 nova_compute[187439]: 2025-10-09 09:58:55.318 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 09:58:55 compute-0 nova_compute[187439]: 2025-10-09 09:58:55.323 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  9 09:58:55 compute-0 nova_compute[187439]: 2025-10-09 09:58:55.335 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  9 09:58:55 compute-0 nova_compute[187439]: 2025-10-09 09:58:55.347 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  9 09:58:55 compute-0 nova_compute[187439]: 2025-10-09 09:58:55.348 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:58:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:55.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:58:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:55.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:58:56 compute-0 nova_compute[187439]: 2025-10-09 09:58:56.348 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:58:56 compute-0 nova_compute[187439]: 2025-10-09 09:58:56.349 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  9 09:58:56 compute-0 nova_compute[187439]: 2025-10-09 09:58:56.349 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  9 09:58:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:56 compute-0 nova_compute[187439]: 2025-10-09 09:58:56.365 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  9 09:58:56 compute-0 nova_compute[187439]: 2025-10-09 09:58:56.365 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:58:56 compute-0 nova_compute[187439]: 2025-10-09 09:58:56.366 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:58:56 compute-0 nova_compute[187439]: 2025-10-09 09:58:56.366 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:58:56 compute-0 nova_compute[187439]: 2025-10-09 09:58:56.366 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
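[annotation] `_reclaim_queued_deletes` short-circuits because `reclaim_instance_interval` is at its default of 0, i.e. soft-delete reclaim is disabled and deletes take effect immediately. A quick way to confirm the effective value on disk (the nova.conf path is an assumption; on this containerized layout it may live under a config-data directory instead):

    import configparser

    # interpolation=None / strict=False guard against '%' logging formats and
    # duplicate keys that are common in generated nova.conf files.
    cfg = configparser.ConfigParser(interpolation=None, strict=False)
    cfg.read("/etc/nova/nova.conf")  # assumed path
    interval = cfg.getint("DEFAULT", "reclaim_instance_interval", fallback=0)
    print("soft-delete reclaim enabled:", interval > 0, "interval:", interval)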
Oct  9 09:58:56 compute-0 nova_compute[187439]: 2025-10-09 09:58:56.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:57 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v774: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 57 op/s
Oct  9 09:58:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:57.064Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:57.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:57.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:57.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:57 compute-0 nova_compute[187439]: 2025-10-09 09:58:57.260 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:58:57 compute-0 nova_compute[187439]: 2025-10-09 09:58:57.386 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760003922.3853574, 3ffc41de-d07a-40ee-a277-623db113eda1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  9 09:58:57 compute-0 nova_compute[187439]: 2025-10-09 09:58:57.386 2 INFO nova.compute.manager [-] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] VM Stopped (Lifecycle Event)
Oct  9 09:58:57 compute-0 nova_compute[187439]: 2025-10-09 09:58:57.404 2 DEBUG nova.compute.manager [None req-ab8db371-16f0-400f-a112-032e1465cac6 - - - - - -] [instance: 3ffc41de-d07a-40ee-a277-623db113eda1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  9 09:58:57 compute-0 nova_compute[187439]: 2025-10-09 09:58:57.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:57.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:58:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:57.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:58:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:58:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:58 compute-0 podman[196719]: 2025-10-09 09:58:58.607672624 +0000 UTC m=+0.049854250 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct  9 09:58:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:58.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:58.906Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:58.907Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:58:58.907Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v775: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:58:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
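[annotation] The autoscaler's arithmetic is visible in these lines: pg target ~= capacity_ratio * bias * (OSD count * mon_target_pg_per_osd), then quantized to a power of two and left alone unless it is off by roughly a factor of three. Worked example using the 'images' line; the 300 multiplier assumes 3 OSDs at the default mon_target_pg_per_osd=100, which is consistent with the 60 GiB cluster but is an inference, not something the log states:

    ratio, bias = 0.000665858301588852, 1.0   # values from the 'images' line
    raw_target = ratio * bias * 300           # assumed: 3 OSDs * 100 target PGs/OSD
    print(raw_target)                         # 0.19975749..., matching the logged pg target
    # Quantized to a power of two and kept at the current 32 PGs, since target
    # and current differ by far less than the ~3x threshold for a change.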
Oct  9 09:58:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:58:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:59.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:58:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:58:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:59.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
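Editor's note: the beast lines use radosgw's fixed access-log layout (client address, user, timestamp, request line, status, bytes, latency); the pairs from 192.168.122.100 and .102 every two seconds are load-balancer health probes. A small parser matching exactly the shape shown here (a sketch against this log, not radosgw's canonical grammar):

    import re

    # Field names below are descriptive, not official radosgw terminology.
    BEAST = re.compile(
        r'beast: \S+: (?P<addr>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f7346e135d0: 192.168.122.102 - anonymous '
            '[09/Oct/2025:09:58:59.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000009s')
    m = BEAST.search(line)
    print(m.group('addr'), m.group('status'), m.group('latency'))
    # 192.168.122.102 200 0.001000009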
Oct  9 09:59:01 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v776: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s
Oct  9 09:59:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:01.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:01 compute-0 nova_compute[187439]: 2025-10-09 09:59:01.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:59:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:01.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:59:02] "GET /metrics HTTP/1.1" 200 48527 "" "Prometheus/2.51.0"
Oct  9 09:59:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:59:02] "GET /metrics HTTP/1.1" 200 48527 "" "Prometheus/2.51.0"
Oct  9 09:59:02 compute-0 nova_compute[187439]: 2025-10-09 09:59:02.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:59:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:03 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v777: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:59:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.002000018s ======
Oct  9 09:59:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:03.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000018s
Oct  9 09:59:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:59:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:03.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:59:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:59:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:59:05 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v778: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 09:59:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:05.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:05.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:06 compute-0 nova_compute[187439]: 2025-10-09 09:59:06.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:59:07 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v779: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct  9 09:59:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:07.065Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:07.074Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:07.075Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:07.075Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
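Editor's note: all three ceph-dashboard webhook targets fail identically: np0005478302/3/4.shiftstack do not resolve against the resolver at 192.168.122.80:53, so Alertmanager exhausts its retry budget and cancels the notification. The resolver view from this host can be confirmed with the standard library alone:

    import socket

    for host in ('np0005478302.shiftstack',
                 'np0005478303.shiftstack',
                 'np0005478304.shiftstack'):
        try:
            addrs = {ai[4][0] for ai in socket.getaddrinfo(host, 8443)}
            print(host, '->', sorted(addrs))
        except socket.gaierror as exc:
            # Matches the "no such host" the dispatcher keeps logging.
            print(host, '-> unresolved:', exc)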
Oct  9 09:59:07 compute-0 nova_compute[187439]: 2025-10-09 09:59:07.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:59:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:07.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:59:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:07.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:59:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:08 compute-0 podman[196746]: 2025-10-09 09:59:08.636088204 +0000 UTC m=+0.074098817 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
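Editor's note: the podman event above is a periodic health check for the ovn_controller container (health_status=healthy, failing streak 0), with the container's full config_data echoed into the journal. The same state can be read back from podman's JSON output; a minimal sketch, assuming the podman CLI is on PATH:

    import json
    import subprocess

    # `podman inspect` emits a JSON array; .State.Health mirrors the
    # health_status / health_failing_streak fields shown in the journal.
    out = subprocess.run(
        ['podman', 'inspect', 'ovn_controller'],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(out)[0]['State']['Health']
    print(health['Status'], health['FailingStreak'])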
Oct  9 09:59:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:08.899Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:08.907Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:08.908Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:08.908Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:09 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v780: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct  9 09:59:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:09.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:59:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:09.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:59:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:59:10.111 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:59:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:59:10.113 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:59:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:59:10.113 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
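Editor's note: those three lockutils lines are one acquire/release cycle around ProcessMonitor._check_child_processes (waited 0.002s, held 0.000s). oslo.concurrency emits exactly this pattern for a synchronized callable; a toy equivalent with a hypothetical class, using the same lock name as the log:

    from oslo_concurrency import lockutils

    class ProcessMonitorSketch:
        # Same decorator style neutron uses; the Acquiring/acquired/released
        # DEBUG lines above come from lockutils' "inner" wrapper.
        @lockutils.synchronized('_check_child_processes')
        def _check_child_processes(self):
            # ... inspect child processes while holding the lock ...
            pass

    ProcessMonitorSketch()._check_child_processes()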
Oct  9 09:59:11 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v781: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Oct  9 09:59:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:59:11.124 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  9 09:59:11 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:59:11.126 92053 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  9 09:59:11 compute-0 nova_compute[187439]: 2025-10-09 09:59:11.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:59:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:11.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:11 compute-0 nova_compute[187439]: 2025-10-09 09:59:11.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:59:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:11.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:11 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:11 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:11 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:59:12] "GET /metrics HTTP/1.1" 200 48521 "" "Prometheus/2.51.0"
Oct  9 09:59:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:59:12] "GET /metrics HTTP/1.1" 200 48521 "" "Prometheus/2.51.0"
Oct  9 09:59:12 compute-0 nova_compute[187439]: 2025-10-09 09:59:12.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:59:13 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v782: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Oct  9 09:59:13 compute-0 ovn_metadata_agent[92048]: 2025-10-09 09:59:13.128 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ef217152-08e8-40c8-a663-3565c5b77d4a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
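Editor's note: the transaction at 09:59:13 is the chassis update the agent announced two seconds earlier: one DbSetCommand writing 'neutron:ovn-metadata-sb-cfg': '7' into Chassis_Private.external_ids to acknowledge nb_cfg=7. With an ovsdbapp connection in hand this is a single call; a sketch, where `idl` is assumed to be an already-connected ovsdbapp backend for the OVN southbound DB:

    # Equivalent of the logged DbSetCommand via ovsdbapp's generic API.
    idl.db_set(
        'Chassis_Private',
        'ef217152-08e8-40c8-a663-3565c5b77d4a',        # record UUID from the log
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),
    ).execute(check_error=True)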
Oct  9 09:59:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:13.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:59:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:13.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:59:15 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v783: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Oct  9 09:59:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:15.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:15.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 09:59:16 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 09:59:16 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:16 compute-0 nova_compute[187439]: 2025-10-09 09:59:16.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:59:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 09:59:16 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 09:59:16 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:16 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:16 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:16 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:17 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:17 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v784: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct  9 09:59:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:17.066Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:17.074Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:17.075Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:17.075Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:17 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:59:17 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 09:59:17 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 09:59:17 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:59:17 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v785: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 15 KiB/s wr, 86 op/s
Oct  9 09:59:17 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 09:59:17 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:17 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 09:59:17 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:17 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 09:59:17 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 09:59:17 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 09:59:17 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:59:17 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 09:59:17 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
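Editor's note: each handle_command/audit pair above is one mon_command round trip from the cephadm mgr module (config-key set, auth get, osd tree, config generate-minimal-conf); for config-key set the audit line deliberately omits cmd= so the stored value never reaches the log. The same calls can be issued from Python through librados; a sketch, assuming a readable /etc/ceph/ceph.conf and an admin keyring:

    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        # Mirrors the audited command {"prefix": "config generate-minimal-conf"}
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({'prefix': 'config generate-minimal-conf'}), b'')
        print(ret, outbuf.decode())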
Oct  9 09:59:17 compute-0 nova_compute[187439]: 2025-10-09 09:59:17.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:59:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:17.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:17 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:17 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:17 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:17 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:17 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:59:17 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:17 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:17 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:59:17 compute-0 podman[196963]: 2025-10-09 09:59:17.832176744 +0000 UTC m=+0.035406349 container create c4431482c04855ba95f9e9929852ebb1e41d82c810ef9d243855b5f6ca6b0ebb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:59:17 compute-0 systemd[1]: Started libpod-conmon-c4431482c04855ba95f9e9929852ebb1e41d82c810ef9d243855b5f6ca6b0ebb.scope.
Oct  9 09:59:17 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:59:17 compute-0 podman[196963]: 2025-10-09 09:59:17.896943099 +0000 UTC m=+0.100172714 container init c4431482c04855ba95f9e9929852ebb1e41d82c810ef9d243855b5f6ca6b0ebb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:59:17 compute-0 podman[196963]: 2025-10-09 09:59:17.901834544 +0000 UTC m=+0.105064139 container start c4431482c04855ba95f9e9929852ebb1e41d82c810ef9d243855b5f6ca6b0ebb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct  9 09:59:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:17.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:17 compute-0 podman[196963]: 2025-10-09 09:59:17.904449962 +0000 UTC m=+0.107679556 container attach c4431482c04855ba95f9e9929852ebb1e41d82c810ef9d243855b5f6ca6b0ebb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mahavira, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:59:17 compute-0 affectionate_mahavira[196976]: 167 167
Oct  9 09:59:17 compute-0 systemd[1]: libpod-c4431482c04855ba95f9e9929852ebb1e41d82c810ef9d243855b5f6ca6b0ebb.scope: Deactivated successfully.
Oct  9 09:59:17 compute-0 podman[196963]: 2025-10-09 09:59:17.906751498 +0000 UTC m=+0.109981093 container died c4431482c04855ba95f9e9929852ebb1e41d82c810ef9d243855b5f6ca6b0ebb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mahavira, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  9 09:59:17 compute-0 podman[196963]: 2025-10-09 09:59:17.817480916 +0000 UTC m=+0.020710531 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:59:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-74ddf1b6e280b63fc3be6dfd98096a4e15f3461e1a8bf6b4183ac1a0c3e71661-merged.mount: Deactivated successfully.
Oct  9 09:59:17 compute-0 podman[196963]: 2025-10-09 09:59:17.930640227 +0000 UTC m=+0.133869821 container remove c4431482c04855ba95f9e9929852ebb1e41d82c810ef9d243855b5f6ca6b0ebb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  9 09:59:17 compute-0 systemd[1]: libpod-conmon-c4431482c04855ba95f9e9929852ebb1e41d82c810ef9d243855b5f6ca6b0ebb.scope: Deactivated successfully.
Oct  9 09:59:18 compute-0 podman[196998]: 2025-10-09 09:59:18.073250494 +0000 UTC m=+0.036318919 container create 44fa236e6ea613d7008a66e523aff96401f7a7fb24842cd24b2ae5d362d9af91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:59:18 compute-0 systemd[1]: Started libpod-conmon-44fa236e6ea613d7008a66e523aff96401f7a7fb24842cd24b2ae5d362d9af91.scope.
Oct  9 09:59:18 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74722392cb6b0317dcaa42d50a72f41fc53090eaa9f5fcf216380a5e138af388/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74722392cb6b0317dcaa42d50a72f41fc53090eaa9f5fcf216380a5e138af388/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74722392cb6b0317dcaa42d50a72f41fc53090eaa9f5fcf216380a5e138af388/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74722392cb6b0317dcaa42d50a72f41fc53090eaa9f5fcf216380a5e138af388/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74722392cb6b0317dcaa42d50a72f41fc53090eaa9f5fcf216380a5e138af388/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:59:18 compute-0 podman[196998]: 2025-10-09 09:59:18.146076903 +0000 UTC m=+0.109145348 container init 44fa236e6ea613d7008a66e523aff96401f7a7fb24842cd24b2ae5d362d9af91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shirley, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:59:18 compute-0 podman[196998]: 2025-10-09 09:59:18.152883848 +0000 UTC m=+0.115952273 container start 44fa236e6ea613d7008a66e523aff96401f7a7fb24842cd24b2ae5d362d9af91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  9 09:59:18 compute-0 podman[196998]: 2025-10-09 09:59:18.057273261 +0000 UTC m=+0.020341706 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:59:18 compute-0 podman[196998]: 2025-10-09 09:59:18.154646689 +0000 UTC m=+0.117715115 container attach 44fa236e6ea613d7008a66e523aff96401f7a7fb24842cd24b2ae5d362d9af91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:59:18 compute-0 epic_shirley[197013]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:59:18 compute-0 epic_shirley[197013]: --> All data devices are unavailable
Oct  9 09:59:18 compute-0 systemd[1]: libpod-44fa236e6ea613d7008a66e523aff96401f7a7fb24842cd24b2ae5d362d9af91.scope: Deactivated successfully.
Oct  9 09:59:18 compute-0 conmon[197013]: conmon 44fa236e6ea613d7008a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-44fa236e6ea613d7008a66e523aff96401f7a7fb24842cd24b2ae5d362d9af91.scope/container/memory.events
Oct  9 09:59:18 compute-0 podman[196998]: 2025-10-09 09:59:18.443952071 +0000 UTC m=+0.407020496 container died 44fa236e6ea613d7008a66e523aff96401f7a7fb24842cd24b2ae5d362d9af91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shirley, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 09:59:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-74722392cb6b0317dcaa42d50a72f41fc53090eaa9f5fcf216380a5e138af388-merged.mount: Deactivated successfully.
Oct  9 09:59:18 compute-0 podman[196998]: 2025-10-09 09:59:18.47086686 +0000 UTC m=+0.433935286 container remove 44fa236e6ea613d7008a66e523aff96401f7a7fb24842cd24b2ae5d362d9af91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:59:18 compute-0 systemd[1]: libpod-conmon-44fa236e6ea613d7008a66e523aff96401f7a7fb24842cd24b2ae5d362d9af91.scope: Deactivated successfully.
Oct  9 09:59:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:18.899Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:18.908Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:18.908Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:18.909Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:18 compute-0 podman[197121]: 2025-10-09 09:59:18.978554698 +0000 UTC m=+0.031531258 container create 44457ecde0e6fb5b76b0f4701833751d62507d3394cf850981609255c2f9f333 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_neumann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:59:19 compute-0 systemd[1]: Started libpod-conmon-44457ecde0e6fb5b76b0f4701833751d62507d3394cf850981609255c2f9f333.scope.
Oct  9 09:59:19 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:59:19 compute-0 podman[197121]: 2025-10-09 09:59:19.040770788 +0000 UTC m=+0.093747358 container init 44457ecde0e6fb5b76b0f4701833751d62507d3394cf850981609255c2f9f333 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_neumann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:59:19 compute-0 podman[197121]: 2025-10-09 09:59:19.048668337 +0000 UTC m=+0.101644897 container start 44457ecde0e6fb5b76b0f4701833751d62507d3394cf850981609255c2f9f333 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:59:19 compute-0 pensive_neumann[197134]: 167 167
Oct  9 09:59:19 compute-0 podman[197121]: 2025-10-09 09:59:19.051715148 +0000 UTC m=+0.104691708 container attach 44457ecde0e6fb5b76b0f4701833751d62507d3394cf850981609255c2f9f333 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:59:19 compute-0 systemd[1]: libpod-44457ecde0e6fb5b76b0f4701833751d62507d3394cf850981609255c2f9f333.scope: Deactivated successfully.
Oct  9 09:59:19 compute-0 podman[197121]: 2025-10-09 09:59:19.055669838 +0000 UTC m=+0.108646399 container died 44457ecde0e6fb5b76b0f4701833751d62507d3394cf850981609255c2f9f333 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_neumann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:59:19 compute-0 podman[197121]: 2025-10-09 09:59:18.96604516 +0000 UTC m=+0.019021740 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:59:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ff49a6480d1f8e03456d8125df0672eea717bae82f03fb4821427167561cb92-merged.mount: Deactivated successfully.
Oct  9 09:59:19 compute-0 podman[197121]: 2025-10-09 09:59:19.079624711 +0000 UTC m=+0.132601271 container remove 44457ecde0e6fb5b76b0f4701833751d62507d3394cf850981609255c2f9f333 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:59:19 compute-0 systemd[1]: libpod-conmon-44457ecde0e6fb5b76b0f4701833751d62507d3394cf850981609255c2f9f333.scope: Deactivated successfully.
Oct  9 09:59:19 compute-0 podman[197157]: 2025-10-09 09:59:19.219169263 +0000 UTC m=+0.036711638 container create ab4f065dd506748b07b1700809c289195d7bc7e81958e36622d2ddf5b62a1f7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_lumiere, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:59:19 compute-0 systemd[1]: Started libpod-conmon-ab4f065dd506748b07b1700809c289195d7bc7e81958e36622d2ddf5b62a1f7f.scope.
Oct  9 09:59:19 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894afef092e90680071e97d971b9a7138cf22bb694ab0a158d2c3db188ee6c23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894afef092e90680071e97d971b9a7138cf22bb694ab0a158d2c3db188ee6c23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894afef092e90680071e97d971b9a7138cf22bb694ab0a158d2c3db188ee6c23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:59:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894afef092e90680071e97d971b9a7138cf22bb694ab0a158d2c3db188ee6c23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:59:19 compute-0 podman[197157]: 2025-10-09 09:59:19.293158604 +0000 UTC m=+0.110700989 container init ab4f065dd506748b07b1700809c289195d7bc7e81958e36622d2ddf5b62a1f7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct  9 09:59:19 compute-0 podman[197157]: 2025-10-09 09:59:19.298271588 +0000 UTC m=+0.115813953 container start ab4f065dd506748b07b1700809c289195d7bc7e81958e36622d2ddf5b62a1f7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_lumiere, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  9 09:59:19 compute-0 podman[197157]: 2025-10-09 09:59:19.29946208 +0000 UTC m=+0.117004455 container attach ab4f065dd506748b07b1700809c289195d7bc7e81958e36622d2ddf5b62a1f7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:59:19 compute-0 podman[197157]: 2025-10-09 09:59:19.205270346 +0000 UTC m=+0.022812732 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:59:19 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v786: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 15 KiB/s wr, 86 op/s
Oct  9 09:59:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:19.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:19 compute-0 nice_lumiere[197170]: {
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:    "1": [
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:        {
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:            "devices": [
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:                "/dev/loop3"
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:            ],
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:            "lv_name": "ceph_lv0",
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:            "lv_size": "21470642176",
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:            "name": "ceph_lv0",
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:            "tags": {
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:                "ceph.cluster_name": "ceph",
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:                "ceph.crush_device_class": "",
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:                "ceph.encrypted": "0",
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:                "ceph.osd_id": "1",
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:                "ceph.type": "block",
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:                "ceph.vdo": "0",
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:                "ceph.with_tpm": "0"
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:            },
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:            "type": "block",
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:            "vg_name": "ceph_vg0"
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:        }
Oct  9 09:59:19 compute-0 nice_lumiere[197170]:    ]
Oct  9 09:59:19 compute-0 nice_lumiere[197170]: }
Oct  9 09:59:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:59:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:59:19 compute-0 systemd[1]: libpod-ab4f065dd506748b07b1700809c289195d7bc7e81958e36622d2ddf5b62a1f7f.scope: Deactivated successfully.
Oct  9 09:59:19 compute-0 conmon[197170]: conmon ab4f065dd506748b07b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ab4f065dd506748b07b1700809c289195d7bc7e81958e36622d2ddf5b62a1f7f.scope/container/memory.events
Oct  9 09:59:19 compute-0 podman[197157]: 2025-10-09 09:59:19.578513392 +0000 UTC m=+0.396055757 container died ab4f065dd506748b07b1700809c289195d7bc7e81958e36622d2ddf5b62a1f7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:59:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-894afef092e90680071e97d971b9a7138cf22bb694ab0a158d2c3db188ee6c23-merged.mount: Deactivated successfully.
Oct  9 09:59:19 compute-0 podman[197157]: 2025-10-09 09:59:19.608476574 +0000 UTC m=+0.426018939 container remove ab4f065dd506748b07b1700809c289195d7bc7e81958e36622d2ddf5b62a1f7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  9 09:59:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:59:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:59:19 compute-0 systemd[1]: libpod-conmon-ab4f065dd506748b07b1700809c289195d7bc7e81958e36622d2ddf5b62a1f7f.scope: Deactivated successfully.
Oct  9 09:59:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:59:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:59:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:59:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:59:19 compute-0 podman[197212]: 2025-10-09 09:59:19.790018702 +0000 UTC m=+0.056639754 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  9 09:59:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:59:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:19.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:59:20 compute-0 podman[197288]: 2025-10-09 09:59:20.176686551 +0000 UTC m=+0.036867232 container create daf0158da0f616bec5e4fbcaf29791486e4608693ef7f5f7cd9fe829a01d58d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_burnell, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 09:59:20 compute-0 systemd[1]: Started libpod-conmon-daf0158da0f616bec5e4fbcaf29791486e4608693ef7f5f7cd9fe829a01d58d4.scope.
Oct  9 09:59:20 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:59:20 compute-0 podman[197288]: 2025-10-09 09:59:20.241629838 +0000 UTC m=+0.101810509 container init daf0158da0f616bec5e4fbcaf29791486e4608693ef7f5f7cd9fe829a01d58d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:59:20 compute-0 podman[197288]: 2025-10-09 09:59:20.247092871 +0000 UTC m=+0.107273542 container start daf0158da0f616bec5e4fbcaf29791486e4608693ef7f5f7cd9fe829a01d58d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_burnell, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:59:20 compute-0 zen_burnell[197301]: 167 167
Oct  9 09:59:20 compute-0 systemd[1]: libpod-daf0158da0f616bec5e4fbcaf29791486e4608693ef7f5f7cd9fe829a01d58d4.scope: Deactivated successfully.
Oct  9 09:59:20 compute-0 conmon[197301]: conmon daf0158da0f616bec5e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-daf0158da0f616bec5e4fbcaf29791486e4608693ef7f5f7cd9fe829a01d58d4.scope/container/memory.events
Oct  9 09:59:20 compute-0 podman[197288]: 2025-10-09 09:59:20.248252175 +0000 UTC m=+0.108432846 container attach daf0158da0f616bec5e4fbcaf29791486e4608693ef7f5f7cd9fe829a01d58d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_burnell, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  9 09:59:20 compute-0 podman[197288]: 2025-10-09 09:59:20.25350937 +0000 UTC m=+0.113690042 container died daf0158da0f616bec5e4fbcaf29791486e4608693ef7f5f7cd9fe829a01d58d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True)
Oct  9 09:59:20 compute-0 podman[197288]: 2025-10-09 09:59:20.16339589 +0000 UTC m=+0.023576581 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:59:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-60bfbc44b68c3e7a7ab21a150921f96a2fc12b800fe0a265cf6311a31b3d6abb-merged.mount: Deactivated successfully.
Oct  9 09:59:20 compute-0 podman[197288]: 2025-10-09 09:59:20.278733614 +0000 UTC m=+0.138914285 container remove daf0158da0f616bec5e4fbcaf29791486e4608693ef7f5f7cd9fe829a01d58d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_burnell, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:59:20 compute-0 systemd[1]: libpod-conmon-daf0158da0f616bec5e4fbcaf29791486e4608693ef7f5f7cd9fe829a01d58d4.scope: Deactivated successfully.
Oct  9 09:59:20 compute-0 podman[197323]: 2025-10-09 09:59:20.419690898 +0000 UTC m=+0.034077085 container create 22619b4eb7b37897b61b456666857533b7e6af5f67058f38005b51ca8fc13a16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_pike, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  9 09:59:20 compute-0 systemd[1]: Started libpod-conmon-22619b4eb7b37897b61b456666857533b7e6af5f67058f38005b51ca8fc13a16.scope.
Oct  9 09:59:20 compute-0 systemd[1]: Started libcrun container.
Oct  9 09:59:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8078d9cf5f127afb35fe9fb2b56bd621a342777d1c5d630e3e901f27ac07c041/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:59:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8078d9cf5f127afb35fe9fb2b56bd621a342777d1c5d630e3e901f27ac07c041/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:59:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8078d9cf5f127afb35fe9fb2b56bd621a342777d1c5d630e3e901f27ac07c041/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:59:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8078d9cf5f127afb35fe9fb2b56bd621a342777d1c5d630e3e901f27ac07c041/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:59:20 compute-0 podman[197323]: 2025-10-09 09:59:20.483657655 +0000 UTC m=+0.098043852 container init 22619b4eb7b37897b61b456666857533b7e6af5f67058f38005b51ca8fc13a16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_pike, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:59:20 compute-0 podman[197323]: 2025-10-09 09:59:20.491081112 +0000 UTC m=+0.105467288 container start 22619b4eb7b37897b61b456666857533b7e6af5f67058f38005b51ca8fc13a16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_pike, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:59:20 compute-0 podman[197323]: 2025-10-09 09:59:20.493334096 +0000 UTC m=+0.107720283 container attach 22619b4eb7b37897b61b456666857533b7e6af5f67058f38005b51ca8fc13a16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_pike, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  9 09:59:20 compute-0 podman[197323]: 2025-10-09 09:59:20.40786124 +0000 UTC m=+0.022247437 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:59:21 compute-0 funny_pike[197336]: {}
Oct  9 09:59:21 compute-0 lvm[197414]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:59:21 compute-0 lvm[197414]: VG ceph_vg0 finished
Oct  9 09:59:21 compute-0 systemd[1]: libpod-22619b4eb7b37897b61b456666857533b7e6af5f67058f38005b51ca8fc13a16.scope: Deactivated successfully.
Oct  9 09:59:21 compute-0 systemd[1]: libpod-22619b4eb7b37897b61b456666857533b7e6af5f67058f38005b51ca8fc13a16.scope: Consumed 1.059s CPU time.
Oct  9 09:59:21 compute-0 podman[197323]: 2025-10-09 09:59:21.118097294 +0000 UTC m=+0.732483470 container died 22619b4eb7b37897b61b456666857533b7e6af5f67058f38005b51ca8fc13a16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_pike, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct  9 09:59:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8078d9cf5f127afb35fe9fb2b56bd621a342777d1c5d630e3e901f27ac07c041-merged.mount: Deactivated successfully.
Oct  9 09:59:21 compute-0 podman[197323]: 2025-10-09 09:59:21.147824742 +0000 UTC m=+0.762210919 container remove 22619b4eb7b37897b61b456666857533b7e6af5f67058f38005b51ca8fc13a16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 09:59:21 compute-0 systemd[1]: libpod-conmon-22619b4eb7b37897b61b456666857533b7e6af5f67058f38005b51ca8fc13a16.scope: Deactivated successfully.
Oct  9 09:59:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 09:59:21 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 09:59:21 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:21 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v787: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 80 op/s
Oct  9 09:59:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:21.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:21 compute-0 nova_compute[187439]: 2025-10-09 09:59:21.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:59:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:21.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:21 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:22 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:22 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:59:22] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Oct  9 09:59:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:59:22] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Oct  9 09:59:22 compute-0 nova_compute[187439]: 2025-10-09 09:59:22.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:59:23 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v788: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 80 op/s
Oct  9 09:59:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:23.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:23.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:25 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v789: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 80 op/s
Oct  9 09:59:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:25.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:25 compute-0 podman[197454]: 2025-10-09 09:59:25.642803137 +0000 UTC m=+0.073361749 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  9 09:59:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:59:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:25.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:59:26 compute-0 ovn_controller[83056]: 2025-10-09T09:59:26Z|00049|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct  9 09:59:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:26 compute-0 nova_compute[187439]: 2025-10-09 09:59:26.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:59:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:26 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:26 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:26 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:26 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:27.067Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:27.076Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:27.076Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:27.076Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:27 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v790: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 316 KiB/s rd, 2.5 MiB/s wr, 71 op/s
Oct  9 09:59:27 compute-0 nova_compute[187439]: 2025-10-09 09:59:27.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:59:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:27.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:27.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:28.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:28.911Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:28.912Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:28.912Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:29 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v791: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct  9 09:59:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:29.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:29 compute-0 podman[197474]: 2025-10-09 09:59:29.650658869 +0000 UTC m=+0.081553332 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  9 09:59:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:29.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:30 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:30 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:30 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:31 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:31 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v792: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  9 09:59:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:31.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:31 compute-0 nova_compute[187439]: 2025-10-09 09:59:31.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:59:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:31.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:59:32] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Oct  9 09:59:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:59:32] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Oct  9 09:59:32 compute-0 nova_compute[187439]: 2025-10-09 09:59:32.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:59:33 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v793: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  9 09:59:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:59:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:33.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:59:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:33.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:59:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:59:35 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v794: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  9 09:59:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:35.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:35.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:36 compute-0 nova_compute[187439]: 2025-10-09 09:59:36.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:59:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:37.068Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:37.075Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:37.076Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:37.076Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:37 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v795: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  9 09:59:37 compute-0 nova_compute[187439]: 2025-10-09 09:59:37.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:59:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:37.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:59:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:37.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
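The anonymous "HEAD / HTTP/1.0" requests that radosgw logs every two seconds from 192.168.122.100 and .102 have the shape of load-balancer health probes: no auth, no body, 200 with near-zero latency. A sketch that issues the same probe by hand; the address and port are assumptions, so substitute the RGW frontend endpoint for this deployment:

import http.client

# Hypothetical endpoint; point this at the actual beast frontend.
conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status, resp.reason)  # expect 200 OK, matching the log
conn.close()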
Oct  9 09:59:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:38.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:38.911Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:38.912Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:38.912Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:39 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v796: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 1 op/s
Oct  9 09:59:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:59:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:39.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:59:39 compute-0 podman[197527]: 2025-10-09 09:59:39.628894687 +0000 UTC m=+0.065907795 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
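podman emits one of these container health_status events per healthcheck run; here ovn_controller's configured test ('/openstack/healthcheck', per the config_data above) passed, hence health_status=healthy with a zero failing streak. The same state can be read back on demand; a sketch using podman inspect (the .State.Health vs .State.Healthcheck key name varies across podman versions, hence the fallback):

import json, subprocess

def health(container="ovn_controller"):
    out = subprocess.run(
        ["podman", "inspect", container],
        capture_output=True, text=True, check=True,
    )
    state = json.loads(out.stdout)[0]["State"]
    # Health data only exists for containers with a configured
    # healthcheck, as ovn_controller has here.
    return state.get("Health") or state.get("Healthcheck")

if __name__ == "__main__":
    print(json.dumps(health(), indent=2))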
Oct  9 09:59:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:39.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:40 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:41 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v797: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 14 KiB/s wr, 2 op/s
Oct  9 09:59:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:41.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:41 compute-0 nova_compute[187439]: 2025-10-09 09:59:41.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:41.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:59:42] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Oct  9 09:59:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:59:42] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
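These paired lines are the same event recorded twice, once by the mgr container unit and once by ceph-mgr itself: Prometheus at 192.168.122.100 scraping the mgr's /metrics endpoint (~48 KiB of plaintext metrics, every 10 seconds in this log). Fetching it by hand is a quick liveness check; a sketch assuming the mgr prometheus module's default port 9283 and this node's address:

import urllib.request

URL = "http://192.168.122.100:9283/metrics"  # port is an assumption
with urllib.request.urlopen(URL, timeout=5) as r:
    body = r.read()
print(r.status, len(body), "bytes")  # the log shows 200 / ~48537 bytes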
Oct  9 09:59:42 compute-0 nova_compute[187439]: 2025-10-09 09:59:42.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:43 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v798: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 2.0 KiB/s wr, 1 op/s
Oct  9 09:59:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:59:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:43.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:59:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:43.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:45 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:45 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:45 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:45 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:45 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v799: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 2.0 KiB/s wr, 1 op/s
Oct  9 09:59:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:45.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:45.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:59:46.379920) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986379954, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1414, "num_deletes": 250, "total_data_size": 2634059, "memory_usage": 2687976, "flush_reason": "Manual Compaction"}
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986384770, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1603308, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22037, "largest_seqno": 23450, "table_properties": {"data_size": 1598171, "index_size": 2405, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13335, "raw_average_key_size": 20, "raw_value_size": 1586997, "raw_average_value_size": 2456, "num_data_blocks": 104, "num_entries": 646, "num_filter_entries": 646, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003856, "oldest_key_time": 1760003856, "file_creation_time": 1760003986, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 4876 microseconds, and 3652 cpu microseconds.
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:59:46.384797) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1603308 bytes OK
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:59:46.384809) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:59:46.385405) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:59:46.385415) EVENT_LOG_v1 {"time_micros": 1760003986385412, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:59:46.385424) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2627943, prev total WAL file size 2627943, number of live WAL files 2.
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:59:46.386162) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373531' seq:0, type:0; will stop at (end)
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1565KB)], [47(13MB)]
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986386194, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 16204103, "oldest_snapshot_seqno": -1}
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5587 keys, 13065311 bytes, temperature: kUnknown
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986425943, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 13065311, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13028473, "index_size": 21752, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14021, "raw_key_size": 140406, "raw_average_key_size": 25, "raw_value_size": 12927653, "raw_average_value_size": 2313, "num_data_blocks": 891, "num_entries": 5587, "num_filter_entries": 5587, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002419, "oldest_key_time": 0, "file_creation_time": 1760003986, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:59:46.426123) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 13065311 bytes
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:59:46.426509) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 407.2 rd, 328.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 13.9 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(18.3) write-amplify(8.1) OK, records in: 6045, records dropped: 458 output_compression: NoCompression
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:59:46.426523) EVENT_LOG_v1 {"time_micros": 1760003986426516, "job": 24, "event": "compaction_finished", "compaction_time_micros": 39795, "compaction_time_cpu_micros": 23534, "output_level": 6, "num_output_files": 1, "total_output_size": 13065311, "num_input_records": 6045, "num_output_records": 5587, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986426777, "job": 24, "event": "table_file_deletion", "file_number": 49}
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986428523, "job": 24, "event": "table_file_deletion", "file_number": 47}
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:59:46.386094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:59:46.428552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:59:46.428556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:59:46.428557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:59:46.428558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:59:46 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-09:59:46.428559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
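The rocksdb block above is one complete flush-plus-manual-compaction cycle on the mon store: JOB 23 flushes a ~2.6 MB memtable to L0 table #49 (1,603,308 bytes) and drops WAL 000045.log, then JOB 24 merges #49 with L6 table #47 into table #50 and deletes both inputs. The amplification figures in the JOB 24 summary line can be re-derived from the event-log byte counts; a sketch with the numbers copied verbatim:

l0_input  = 1603308    # table #49, the fresh L0 flush
total_in  = 16204103   # "input_data_size" (L0 #49 + L6 #47)
total_out = 13065311   # table #50, the new L6 file

write_amp = total_out / l0_input               # 8.1 in the log
rw_amp = (total_in + total_out) / l0_input     # 18.3 in the log
print(f"write-amplify {write_amp:.1f}, read-write-amplify {rw_amp:.1f}")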
Oct  9 09:59:46 compute-0 nova_compute[187439]: 2025-10-09 09:59:46.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:47.068Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:47.075Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:47.075Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:47.075Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:47 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v800: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct  9 09:59:47 compute-0 nova_compute[187439]: 2025-10-09 09:59:47.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:47.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:47.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:48.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:48.910Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:48.911Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:48.911Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:49 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v801: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  9 09:59:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:49.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_09:59:49
Oct  9 09:59:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 09:59:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 09:59:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', '.nfs', 'volumes', 'vms', 'default.rgw.log', 'backups', '.mgr', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control']
Oct  9 09:59:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
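A routine balancer pass: mode upmap, misplaced ceiling 5%, and "prepared 0/10 upmap changes" meaning the optimizer evaluated up to ten candidate moves across the twelve listed pools and found none worth making (consistent with the pgmap lines showing all 337 PGs active+clean). To confirm from the CLI, a sketch; ceph balancer status is an upstream command, and --format=json is assumed to be honoured here as it is for most mon/mgr commands:

import json, subprocess

out = subprocess.run(
    ["ceph", "balancer", "status", "--format=json"],
    capture_output=True, text=True, check=True,
)
print(json.dumps(json.loads(out.stdout), indent=2))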
Oct  9 09:59:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 09:59:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 09:59:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:59:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:59:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:59:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:59:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 09:59:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:59:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:59:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:59:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 09:59:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 09:59:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 09:59:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 09:59:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 09:59:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 09:59:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 09:59:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
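The rbd_support handlers above reload their per-pool mirror-snapshot and trash-purge schedules for vms, volumes, backups and images; the empty start_after= fields suggest no schedules are actually configured. The matching CLI queries, as a sketch (both subcommands exist in upstream Ceph; on this cluster the output would simply be empty):

import subprocess

for cmd in (["rbd", "mirror", "snapshot", "schedule", "ls", "--recursive"],
            ["rbd", "trash", "purge", "schedule", "ls", "--recursive"]):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=False)  # empty output = no schedules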
Oct  9 09:59:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:49.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:50 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:50 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:50 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:50 compute-0 podman[197562]: 2025-10-09 09:59:50.614986946 +0000 UTC m=+0.052120279 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  9 09:59:51 compute-0 nova_compute[187439]: 2025-10-09 09:59:51.245 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:59:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:51 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v802: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Oct  9 09:59:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:51.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:51 compute-0 nova_compute[187439]: 2025-10-09 09:59:51.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:51.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:09:59:52] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Oct  9 09:59:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:09:59:52] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Oct  9 09:59:52 compute-0 nova_compute[187439]: 2025-10-09 09:59:52.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:53 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v803: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct  9 09:59:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:53.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:53.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:54 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:55 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:55 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:55 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:55 compute-0 nova_compute[187439]: 2025-10-09 09:59:55.245 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:59:55 compute-0 nova_compute[187439]: 2025-10-09 09:59:55.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:59:55 compute-0 nova_compute[187439]: 2025-10-09 09:59:55.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:59:55 compute-0 nova_compute[187439]: 2025-10-09 09:59:55.269 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:55 compute-0 nova_compute[187439]: 2025-10-09 09:59:55.269 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:55 compute-0 nova_compute[187439]: 2025-10-09 09:59:55.269 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:55 compute-0 nova_compute[187439]: 2025-10-09 09:59:55.269 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:59:55 compute-0 nova_compute[187439]: 2025-10-09 09:59:55.270 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:59:55 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v804: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct  9 09:59:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:59:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:55.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:59:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:59:55 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1814886035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:59:55 compute-0 nova_compute[187439]: 2025-10-09 09:59:55.641 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.370s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:59:55 compute-0 nova_compute[187439]: 2025-10-09 09:59:55.858 2 WARNING nova.virt.libvirt.driver [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:59:55 compute-0 nova_compute[187439]: 2025-10-09 09:59:55.860 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4707MB free_disk=59.92177200317383GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:59:55 compute-0 nova_compute[187439]: 2025-10-09 09:59:55.860 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:55 compute-0 nova_compute[187439]: 2025-10-09 09:59:55.861 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:55 compute-0 nova_compute[187439]: 2025-10-09 09:59:55.905 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:59:55 compute-0 nova_compute[187439]: 2025-10-09 09:59:55.906 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:59:55 compute-0 nova_compute[187439]: 2025-10-09 09:59:55.919 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:59:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:55.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:59:56 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2956790210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:59:56 compute-0 nova_compute[187439]: 2025-10-09 09:59:56.274 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:59:56 compute-0 nova_compute[187439]: 2025-10-09 09:59:56.279 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:59:56 compute-0 nova_compute[187439]: 2025-10-09 09:59:56.292 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:59:56 compute-0 nova_compute[187439]: 2025-10-09 09:59:56.294 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:59:56 compute-0 nova_compute[187439]: 2025-10-09 09:59:56.294 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.433s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
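This completes one update_available_resource cycle: nova's resource tracker audits the hypervisor, shells out twice to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` (the mon's audit log confirms both dispatches), and ends up reporting free_disk ≈ 59.92 GB against the 60 GiB the pgmap lines show available. A sketch of where that figure plausibly comes from, parsing the same command's JSON; the key names follow current ceph df output and are assumptions to that extent:

import json, subprocess

# Command copied verbatim from the log lines above.
out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
)
stats = json.loads(out.stdout)["stats"]
free_gib = stats["total_avail_bytes"] / 1024 ** 3
total_gib = stats["total_bytes"] / 1024 ** 3
print(f"free: {free_gib:.2f} GiB of {total_gib:.0f} GiB")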
Oct  9 09:59:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:56 compute-0 podman[197654]: 2025-10-09 09:59:56.619528191 +0000 UTC m=+0.051852985 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  9 09:59:56 compute-0 nova_compute[187439]: 2025-10-09 09:59:56.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:57.069Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:57.078Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:57.078Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:57.079Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
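Each of the notify failures above is the same underlying problem: the webhook receiver hostnames do not resolve through the DNS server at 192.168.122.80, so every POST dies at the lookup stage. A minimal reproduction of the lookup failure (this sketch uses the system resolver rather than querying 192.168.122.80 directly):

    # Reproduces the "no such host" lookup failure behind the webhook retries.
    import socket

    for host in ('np0005478302.shiftstack',
                 'np0005478303.shiftstack',
                 'np0005478304.shiftstack'):
        try:
            socket.getaddrinfo(host, 8443)
            print(host, 'resolves')
        except socket.gaierror as exc:
            print(host, 'lookup failed:', exc)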
Oct  9 09:59:57 compute-0 nova_compute[187439]: 2025-10-09 09:59:57.294 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:59:57 compute-0 nova_compute[187439]: 2025-10-09 09:59:57.294 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:59:57 compute-0 nova_compute[187439]: 2025-10-09 09:59:57.295 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
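The skip is expected: reclaim_instance_interval defaults to 0, and soft-deleted instances are only purged when it is positive. A paraphrase of the guard behind the "skipping..." line (a sketch, not the Nova source itself):

    # Paraphrase of the reclaim guard, assuming the default configuration.
    reclaim_instance_interval = 0  # nova.conf [DEFAULT]; 0 disables reclaim

    def _reclaim_queued_deletes():
        if reclaim_instance_interval <= 0:
            print('CONF.reclaim_instance_interval <= 0, skipping...')
            return
        # otherwise: purge instances that have been SOFT_DELETED for longer
        # than reclaim_instance_interval seconds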
Oct  9 09:59:57 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v805: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Oct  9 09:59:57 compute-0 nova_compute[187439]: 2025-10-09 09:59:57.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:59:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:59:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:57.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:59:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:57.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
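The anonymous "HEAD / HTTP/1.0" requests arriving every two seconds from 192.168.122.100 and 192.168.122.102 have the shape of load-balancer health probes against the RGW frontend. An equivalent probe by hand (host and port are assumptions; the log records only the client side):

    # Hand-rolled version of the anonymous health probe seen in the beast log.
    import http.client

    conn = http.client.HTTPConnection('compute-0', 8080, timeout=5)  # port assumed
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # the log shows these probes returning 200
    conn.close()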
Oct  9 09:59:58 compute-0 nova_compute[187439]: 2025-10-09 09:59:58.242 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:59:58 compute-0 nova_compute[187439]: 2025-10-09 09:59:58.245 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:59:58 compute-0 nova_compute[187439]: 2025-10-09 09:59:58.245 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  9 09:59:58 compute-0 nova_compute[187439]: 2025-10-09 09:59:58.245 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  9 09:59:58 compute-0 nova_compute[187439]: 2025-10-09 09:59:58.262 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  9 09:59:58 compute-0 nova_compute[187439]: 2025-10-09 09:59:58.262 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:59:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:58.903Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:58.910Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:58.910Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T09:59:58.910Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001109212716513642 of space, bias 1.0, pg target 0.33276381495409263 quantized to 32 (current 32)
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
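The autoscaler lines above all follow one formula: raw pg target = capacity ratio x bias x cluster PG budget, where the budget here works out to 300 (consistent with mon_target_pg_per_osd=100 and 3 OSDs; that factor is inferred from the numbers, not logged directly). The raw value is then rounded to a power of two and held at the current pg_num unless it is off by roughly a factor of three, which is why every pool stays put. Rechecking the arithmetic against the logged values:

    # Rechecking the raw pg targets from the logged capacity ratios.
    PG_BUDGET = 300  # inferred: mon_target_pg_per_osd (100) x 3 OSDs

    def raw_pg_target(usage_ratio, bias):
        return usage_ratio * bias * PG_BUDGET

    print(raw_pg_target(7.185749983720779e-06, 1.0))  # .mgr -> 0.0021557... (matches)
    print(raw_pg_target(0.001109212716513642, 1.0))   # vms  -> 0.33276...  (matches)
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # cephfs.cephfs.meta -> 0.00061047... (matches)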
Oct  9 09:59:59 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v806: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 75 op/s
Oct  9 09:59:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:59.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 09:59:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:59.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:00 compute-0 ceph-mon[4497]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 1 failed cephadm daemon(s)
Oct  9 10:00:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:59 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:59 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 09:59:59 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:00 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:00 compute-0 ceph-mon[4497]: overall HEALTH_WARN 1 failed cephadm daemon(s)
Oct  9 10:00:00 compute-0 systemd[1]: Starting system activity accounting tool...
Oct  9 10:00:00 compute-0 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct  9 10:00:00 compute-0 systemd[1]: Finished system activity accounting tool.
Oct  9 10:00:00 compute-0 podman[197674]: 2025-10-09 10:00:00.612892473 +0000 UTC m=+0.050045722 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:00:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:01 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v807: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct  9 10:00:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:01.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:01 compute-0 nova_compute[187439]: 2025-10-09 10:00:01.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:01.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:00:02] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Oct  9 10:00:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:00:02] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
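The paired lines above are the same scrape recorded twice, once on the container's stdout and once by the mgr's cherrypy access logger. Fetching the endpoint directly looks like this (port 9283 is the prometheus module's default and an assumption here; the log does not record it):

    # Manual scrape of the ceph-mgr prometheus module (host/port assumed).
    import urllib.request

    with urllib.request.urlopen('http://compute-0:9283/metrics', timeout=5) as resp:
        body = resp.read()
    print(resp.status, len(body))  # the log shows 200 with a ~48 KiB body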
Oct  9 10:00:02 compute-0 nova_compute[187439]: 2025-10-09 10:00:02.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:03 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v808: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  9 10:00:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:03.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:03.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:00:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
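The audit line shows the mgr (mgr.compute-0.lwqgfy) polling the OSD blocklist with a JSON mon command. The same command can be issued from python-rados; the conffile path below is the usual default and an assumption:

    # Issue the same mon command the mgr dispatched above, via python-rados.
    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        ret, out, errs = cluster.mon_command(
            json.dumps({'prefix': 'osd blocklist ls', 'format': 'json'}), b'')
        print(ret, out.decode() or errs)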
Oct  9 10:00:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:04 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:04 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:04 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:05 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:05 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v809: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  9 10:00:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:05.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:05.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:06 compute-0 nova_compute[187439]: 2025-10-09 10:00:06.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:07.070Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:07.080Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:07.080Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:07.080Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:07 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v810: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct  9 10:00:07 compute-0 nova_compute[187439]: 2025-10-09 10:00:07.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:07.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:07.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:08.903Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:08.914Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:08.914Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:08.915Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:09 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v811: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  9 10:00:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:09.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:09.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:09 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:09 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:09 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:00:10.114 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 10:00:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:00:10.114 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 10:00:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:00:10.114 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 10:00:10 compute-0 podman[197702]: 2025-10-09 10:00:10.643107835 +0000 UTC m=+0.072657784 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  9 10:00:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:11 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v812: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Oct  9 10:00:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:11.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:11 compute-0 nova_compute[187439]: 2025-10-09 10:00:11.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  9 10:00:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2285839819' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  9 10:00:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  9 10:00:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2285839819' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  9 10:00:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:11.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:00:12] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Oct  9 10:00:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:00:12] "GET /metrics HTTP/1.1" 200 48542 "" "Prometheus/2.51.0"
Oct  9 10:00:12 compute-0 nova_compute[187439]: 2025-10-09 10:00:12.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:13 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v813: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 21 KiB/s wr, 3 op/s
Oct  9 10:00:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:13.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:13.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:14 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:14 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:14 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:15 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v814: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 21 KiB/s wr, 3 op/s
Oct  9 10:00:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:15.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:15.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:16 compute-0 nova_compute[187439]: 2025-10-09 10:00:16.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:17.071Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:17.081Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:17.082Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:17.082Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:17 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v815: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 27 KiB/s wr, 4 op/s
Oct  9 10:00:17 compute-0 nova_compute[187439]: 2025-10-09 10:00:17.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:17.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:17.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:18.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:18.910Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:18.910Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:18.911Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:19 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v816: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 15 KiB/s wr, 2 op/s
Oct  9 10:00:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:19.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:00:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:00:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:00:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:00:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:00:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:00:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:00:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:00:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:19.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:20 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:00:21.393080) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021393130, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 553, "num_deletes": 257, "total_data_size": 617911, "memory_usage": 629912, "flush_reason": "Manual Compaction"}
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct  9 10:00:21 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v817: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 18 KiB/s wr, 3 op/s
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021397445, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 609209, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23451, "largest_seqno": 24003, "table_properties": {"data_size": 606288, "index_size": 893, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6684, "raw_average_key_size": 17, "raw_value_size": 600364, "raw_average_value_size": 1592, "num_data_blocks": 41, "num_entries": 377, "num_filter_entries": 377, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003987, "oldest_key_time": 1760003987, "file_creation_time": 1760004021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 4379 microseconds, and 3183 cpu microseconds.
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:00:21.397467) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 609209 bytes OK
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:00:21.397481) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:00:21.397954) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:00:21.397965) EVENT_LOG_v1 {"time_micros": 1760004021397962, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:00:21.397978) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 614827, prev total WAL file size 614827, number of live WAL files 2.
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:00:21.398555) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353035' seq:0, type:0; will stop at (end)
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(594KB)], [50(12MB)]
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021398581, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 13674520, "oldest_snapshot_seqno": -1}
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5442 keys, 13531148 bytes, temperature: kUnknown
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021439006, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 13531148, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13494426, "index_size": 22020, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 138599, "raw_average_key_size": 25, "raw_value_size": 13395302, "raw_average_value_size": 2461, "num_data_blocks": 899, "num_entries": 5442, "num_filter_entries": 5442, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002419, "oldest_key_time": 0, "file_creation_time": 1760004021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:00:21.439295) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13531148 bytes
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:00:21.439771) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 337.4 rd, 333.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 12.5 +0.0 blob) out(12.9 +0.0 blob), read-write-amplify(44.7) write-amplify(22.2) OK, records in: 5964, records dropped: 522 output_compression: NoCompression
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:00:21.439787) EVENT_LOG_v1 {"time_micros": 1760004021439780, "job": 26, "event": "compaction_finished", "compaction_time_micros": 40525, "compaction_time_cpu_micros": 24273, "output_level": 6, "num_output_files": 1, "total_output_size": 13531148, "num_input_records": 5964, "num_output_records": 5442, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021440100, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021441777, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:00:21.398270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:00:21.441902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:00:21.441911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:00:21.441913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:00:21.441914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:00:21 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:00:21.441917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
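The burst of back-to-back "Manual compaction starting" lines is the monitor compacting its store prefix-by-prefix after trimming old map epochs; it is routine, not an operator action. The same compaction can also be requested by hand with the standard `ceph tell mon.<id> compact` command. A hedged sketch (the mon id `compute-0` and the store path are taken from the lines above; on a cephadm host this would normally be run via `cephadm shell`):

    import subprocess

    # Ask the monitor to compact its RocksDB store, then report the
    # on-disk size of the store directory seen in the log lines above.
    subprocess.run(["ceph", "tell", "mon.compute-0", "compact"], check=True)
    out = subprocess.run(
        ["du", "-sb", "/var/lib/ceph/mon/ceph-compute-0/store.db"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())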
Oct  9 10:00:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:21.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
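These anonymous HEAD / probes recur every couple of seconds from 192.168.122.100 and .102; they look like load-balancer health checks rather than client traffic. The beast access line follows a fixed common-log-style layout (client, user, timestamp, request, status, bytes, latency), so they are easy to filter. A small parser, assuming lines exactly in the format printed above:

    import re

    # Matches the radosgw beast access-log layout seen above.
    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    def parse_beast(line: str):
        m = BEAST.search(line)
        return m.groupdict() if m else None

    sample = ('beast: 0x7f7346e135d0: 192.168.122.102 - anonymous '
              '[09/Oct/2025:10:00:21.546 +0000] "HEAD / HTTP/1.0" 200 0 '
              '- - - latency=0.001000010s')
    print(parse_beast(sample))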
Oct  9 10:00:21 compute-0 podman[197784]: 2025-10-09 10:00:21.555227115 +0000 UTC m=+0.078400705 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)

Oct  9 10:00:21 compute-0 nova_compute[187439]: 2025-10-09 10:00:21.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:21.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:00:22 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:00:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 10:00:22 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:00:22 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v818: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 9.8 KiB/s wr, 2 op/s
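The mgr's pgmap digest compresses cluster state into one line per tick (here v818: all 337 PGs active+clean, 353 MiB used of 60 GiB), which makes it convenient to trend from the journal. A throwaway parser, assuming the exact layout printed above:

    import re

    PGMAP = re.compile(
        r'pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); '
        r'(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, '
        r'(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail'
    )

    line = ('pgmap v818: 337 pgs: 337 active+clean; 200 MiB data, '
            '353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 9.8 KiB/s wr, 2 op/s')
    m = PGMAP.search(line)
    print(m.groupdict())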
Oct  9 10:00:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 10:00:22 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:00:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 10:00:22 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:00:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 10:00:22 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 10:00:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 10:00:22 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:00:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:00:22 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
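Each mgr-initiated command shows up twice in this stream: once as a handle_command entry and once as a log_channel(audit) dispatch. Counting the audited dispatches by command prefix gives a quick picture of what the cephadm module is doing during this pass (config generate-minimal-conf, auth get, osd tree, config-key set). A sketch over syslog lines on stdin, with no attempt to deduplicate the echoed entries:

    import json
    import re
    import sys
    from collections import Counter

    # Count audited mon commands by prefix, from lines like
    # ... cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
    CMD = re.compile(r"cmd=(\[.*\]): dispatch")

    counts = Counter()
    for line in sys.stdin:
        m = CMD.search(line)
        if not m:
            continue
        try:
            cmds = json.loads(m.group(1))
        except json.JSONDecodeError:
            continue  # some audit lines are truncated, e.g. config-key set
        counts.update(c.get("prefix", "?") for c in cmds)

    for prefix, n in counts.most_common():
        print(f"{n:5d}  {prefix}")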
Oct  9 10:00:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:00:22] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Oct  9 10:00:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:00:22] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Oct  9 10:00:22 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:00:22 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:00:22 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:00:22 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:00:22 compute-0 nova_compute[187439]: 2025-10-09 10:00:22.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:22 compute-0 podman[197939]: 2025-10-09 10:00:22.522689129 +0000 UTC m=+0.035564348 container create a1a3986e9beac47b940f650da36ce3c0739a1f33ce8c10f7aeb7309e72a87adb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_khayyam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  9 10:00:22 compute-0 systemd[1]: Started libpod-conmon-a1a3986e9beac47b940f650da36ce3c0739a1f33ce8c10f7aeb7309e72a87adb.scope.
Oct  9 10:00:22 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:00:22 compute-0 podman[197939]: 2025-10-09 10:00:22.584885904 +0000 UTC m=+0.097761124 container init a1a3986e9beac47b940f650da36ce3c0739a1f33ce8c10f7aeb7309e72a87adb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_khayyam, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  9 10:00:22 compute-0 podman[197939]: 2025-10-09 10:00:22.59029372 +0000 UTC m=+0.103168938 container start a1a3986e9beac47b940f650da36ce3c0739a1f33ce8c10f7aeb7309e72a87adb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 10:00:22 compute-0 podman[197939]: 2025-10-09 10:00:22.591728253 +0000 UTC m=+0.104603482 container attach a1a3986e9beac47b940f650da36ce3c0739a1f33ce8c10f7aeb7309e72a87adb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_khayyam, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:00:22 compute-0 infallible_khayyam[197954]: 167 167
Oct  9 10:00:22 compute-0 systemd[1]: libpod-a1a3986e9beac47b940f650da36ce3c0739a1f33ce8c10f7aeb7309e72a87adb.scope: Deactivated successfully.
Oct  9 10:00:22 compute-0 podman[197939]: 2025-10-09 10:00:22.596865989 +0000 UTC m=+0.109741208 container died a1a3986e9beac47b940f650da36ce3c0739a1f33ce8c10f7aeb7309e72a87adb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_khayyam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  9 10:00:22 compute-0 podman[197939]: 2025-10-09 10:00:22.509048374 +0000 UTC m=+0.021923613 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:00:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f9c932a738b1e1df7bb12073e3150e06d634d07ae622e02480d958272f0b1d4-merged.mount: Deactivated successfully.
Oct  9 10:00:22 compute-0 podman[197939]: 2025-10-09 10:00:22.618985821 +0000 UTC m=+0.131861040 container remove a1a3986e9beac47b940f650da36ce3c0739a1f33ce8c10f7aeb7309e72a87adb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:00:22 compute-0 systemd[1]: libpod-conmon-a1a3986e9beac47b940f650da36ce3c0739a1f33ce8c10f7aeb7309e72a87adb.scope: Deactivated successfully.
Oct  9 10:00:22 compute-0 podman[197978]: 2025-10-09 10:00:22.769239643 +0000 UTC m=+0.033720923 container create fa6699d255819461877b0ef6b106e32e6c34d5bc580f66bd50afaf4b848089c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  9 10:00:22 compute-0 systemd[1]: Started libpod-conmon-fa6699d255819461877b0ef6b106e32e6c34d5bc580f66bd50afaf4b848089c8.scope.
Oct  9 10:00:22 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0be7a12d3252da930a841f22b693632c80cfbf1589b5eabd7402a36e3dc8922e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0be7a12d3252da930a841f22b693632c80cfbf1589b5eabd7402a36e3dc8922e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0be7a12d3252da930a841f22b693632c80cfbf1589b5eabd7402a36e3dc8922e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0be7a12d3252da930a841f22b693632c80cfbf1589b5eabd7402a36e3dc8922e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0be7a12d3252da930a841f22b693632c80cfbf1589b5eabd7402a36e3dc8922e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
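These kernel notices mean the overlay's backing XFS filesystem was created without the bigtime feature, so inode timestamps top out at 2038; they are informational, not errors. Whether a given filesystem has bigtime can be checked from xfs_info output on recent xfsprogs. A hedged sketch (the mount point /var/lib/containers is an assumption for this host):

    import subprocess

    # xfs_info prints the meta-data line including bigtime=0/1 on
    # xfsprogs new enough to know about the feature.
    out = subprocess.run(["xfs_info", "/var/lib/containers"],
                         capture_output=True, text=True, check=True).stdout
    print("bigtime enabled:", "bigtime=1" in out)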
Oct  9 10:00:22 compute-0 podman[197978]: 2025-10-09 10:00:22.839575762 +0000 UTC m=+0.104057032 container init fa6699d255819461877b0ef6b106e32e6c34d5bc580f66bd50afaf4b848089c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct  9 10:00:22 compute-0 podman[197978]: 2025-10-09 10:00:22.845272532 +0000 UTC m=+0.109753802 container start fa6699d255819461877b0ef6b106e32e6c34d5bc580f66bd50afaf4b848089c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_mayer, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  9 10:00:22 compute-0 podman[197978]: 2025-10-09 10:00:22.84813608 +0000 UTC m=+0.112617350 container attach fa6699d255819461877b0ef6b106e32e6c34d5bc580f66bd50afaf4b848089c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  9 10:00:22 compute-0 podman[197978]: 2025-10-09 10:00:22.755778415 +0000 UTC m=+0.020259705 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:00:23 compute-0 goofy_mayer[197991]: --> passed data devices: 0 physical, 1 LVM
Oct  9 10:00:23 compute-0 goofy_mayer[197991]: --> All data devices are unavailable
Oct  9 10:00:23 compute-0 systemd[1]: libpod-fa6699d255819461877b0ef6b106e32e6c34d5bc580f66bd50afaf4b848089c8.scope: Deactivated successfully.
Oct  9 10:00:23 compute-0 podman[197978]: 2025-10-09 10:00:23.148187986 +0000 UTC m=+0.412669256 container died fa6699d255819461877b0ef6b106e32e6c34d5bc580f66bd50afaf4b848089c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_mayer, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  9 10:00:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0be7a12d3252da930a841f22b693632c80cfbf1589b5eabd7402a36e3dc8922e-merged.mount: Deactivated successfully.
Oct  9 10:00:23 compute-0 podman[197978]: 2025-10-09 10:00:23.17562483 +0000 UTC m=+0.440106101 container remove fa6699d255819461877b0ef6b106e32e6c34d5bc580f66bd50afaf4b848089c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_mayer, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 10:00:23 compute-0 systemd[1]: libpod-conmon-fa6699d255819461877b0ef6b106e32e6c34d5bc580f66bd50afaf4b848089c8.scope: Deactivated successfully.
Oct  9 10:00:23 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  9 10:00:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:23.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:23 compute-0 podman[198100]: 2025-10-09 10:00:23.675629223 +0000 UTC m=+0.035880594 container create fbd8cbdead138d6aebeb9655c5370a7456eaf08d23e4ae16cc4a6a898c5fef9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:00:23 compute-0 systemd[1]: Started libpod-conmon-fbd8cbdead138d6aebeb9655c5370a7456eaf08d23e4ae16cc4a6a898c5fef9c.scope.
Oct  9 10:00:23 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:00:23 compute-0 podman[198100]: 2025-10-09 10:00:23.741567992 +0000 UTC m=+0.101819373 container init fbd8cbdead138d6aebeb9655c5370a7456eaf08d23e4ae16cc4a6a898c5fef9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_nobel, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:00:23 compute-0 podman[198100]: 2025-10-09 10:00:23.747429231 +0000 UTC m=+0.107680593 container start fbd8cbdead138d6aebeb9655c5370a7456eaf08d23e4ae16cc4a6a898c5fef9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  9 10:00:23 compute-0 podman[198100]: 2025-10-09 10:00:23.748798222 +0000 UTC m=+0.109049584 container attach fbd8cbdead138d6aebeb9655c5370a7456eaf08d23e4ae16cc4a6a898c5fef9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_nobel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct  9 10:00:23 compute-0 admiring_nobel[198113]: 167 167
Oct  9 10:00:23 compute-0 systemd[1]: libpod-fbd8cbdead138d6aebeb9655c5370a7456eaf08d23e4ae16cc4a6a898c5fef9c.scope: Deactivated successfully.
Oct  9 10:00:23 compute-0 podman[198100]: 2025-10-09 10:00:23.752792743 +0000 UTC m=+0.113044104 container died fbd8cbdead138d6aebeb9655c5370a7456eaf08d23e4ae16cc4a6a898c5fef9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_nobel, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:00:23 compute-0 podman[198100]: 2025-10-09 10:00:23.662937345 +0000 UTC m=+0.023188716 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:00:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e0bdc4634446e870dbc7677f51507d690c06194cf379d4a50eb57b626e08649-merged.mount: Deactivated successfully.
Oct  9 10:00:23 compute-0 podman[198100]: 2025-10-09 10:00:23.77391279 +0000 UTC m=+0.134164150 container remove fbd8cbdead138d6aebeb9655c5370a7456eaf08d23e4ae16cc4a6a898c5fef9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_nobel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  9 10:00:23 compute-0 systemd[1]: libpod-conmon-fbd8cbdead138d6aebeb9655c5370a7456eaf08d23e4ae16cc4a6a898c5fef9c.scope: Deactivated successfully.
Oct  9 10:00:23 compute-0 podman[198136]: 2025-10-09 10:00:23.933127855 +0000 UTC m=+0.042633967 container create 9b942484486ca74353e7ae890dea0f1a85760db7fdf7973c180c096fb780766b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_pike, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  9 10:00:23 compute-0 systemd[1]: Started libpod-conmon-9b942484486ca74353e7ae890dea0f1a85760db7fdf7973c180c096fb780766b.scope.
Oct  9 10:00:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:23.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:23 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a8a91ee18fda9e0d8e4b1f13f8229101ed96818441f283278265237bd701b00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a8a91ee18fda9e0d8e4b1f13f8229101ed96818441f283278265237bd701b00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a8a91ee18fda9e0d8e4b1f13f8229101ed96818441f283278265237bd701b00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:00:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a8a91ee18fda9e0d8e4b1f13f8229101ed96818441f283278265237bd701b00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:00:23 compute-0 podman[198136]: 2025-10-09 10:00:23.996106904 +0000 UTC m=+0.105613016 container init 9b942484486ca74353e7ae890dea0f1a85760db7fdf7973c180c096fb780766b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_pike, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:00:24 compute-0 podman[198136]: 2025-10-09 10:00:24.002380412 +0000 UTC m=+0.111886523 container start 9b942484486ca74353e7ae890dea0f1a85760db7fdf7973c180c096fb780766b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_pike, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  9 10:00:24 compute-0 podman[198136]: 2025-10-09 10:00:24.003452984 +0000 UTC m=+0.112959095 container attach 9b942484486ca74353e7ae890dea0f1a85760db7fdf7973c180c096fb780766b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:00:24 compute-0 podman[198136]: 2025-10-09 10:00:23.92106277 +0000 UTC m=+0.030568901 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:00:24 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v819: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 9.8 KiB/s wr, 2 op/s
Oct  9 10:00:24 compute-0 nervous_pike[198149]: {
Oct  9 10:00:24 compute-0 nervous_pike[198149]:    "1": [
Oct  9 10:00:24 compute-0 nervous_pike[198149]:        {
Oct  9 10:00:24 compute-0 nervous_pike[198149]:            "devices": [
Oct  9 10:00:24 compute-0 nervous_pike[198149]:                "/dev/loop3"
Oct  9 10:00:24 compute-0 nervous_pike[198149]:            ],
Oct  9 10:00:24 compute-0 nervous_pike[198149]:            "lv_name": "ceph_lv0",
Oct  9 10:00:24 compute-0 nervous_pike[198149]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:00:24 compute-0 nervous_pike[198149]:            "lv_size": "21470642176",
Oct  9 10:00:24 compute-0 nervous_pike[198149]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 10:00:24 compute-0 nervous_pike[198149]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 10:00:24 compute-0 nervous_pike[198149]:            "name": "ceph_lv0",
Oct  9 10:00:24 compute-0 nervous_pike[198149]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:00:24 compute-0 nervous_pike[198149]:            "tags": {
Oct  9 10:00:24 compute-0 nervous_pike[198149]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:00:24 compute-0 nervous_pike[198149]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 10:00:24 compute-0 nervous_pike[198149]:                "ceph.cephx_lockbox_secret": "",
Oct  9 10:00:24 compute-0 nervous_pike[198149]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 10:00:24 compute-0 nervous_pike[198149]:                "ceph.cluster_name": "ceph",
Oct  9 10:00:24 compute-0 nervous_pike[198149]:                "ceph.crush_device_class": "",
Oct  9 10:00:24 compute-0 nervous_pike[198149]:                "ceph.encrypted": "0",
Oct  9 10:00:24 compute-0 nervous_pike[198149]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 10:00:24 compute-0 nervous_pike[198149]:                "ceph.osd_id": "1",
Oct  9 10:00:24 compute-0 nervous_pike[198149]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 10:00:24 compute-0 nervous_pike[198149]:                "ceph.type": "block",
Oct  9 10:00:24 compute-0 nervous_pike[198149]:                "ceph.vdo": "0",
Oct  9 10:00:24 compute-0 nervous_pike[198149]:                "ceph.with_tpm": "0"
Oct  9 10:00:24 compute-0 nervous_pike[198149]:            },
Oct  9 10:00:24 compute-0 nervous_pike[198149]:            "type": "block",
Oct  9 10:00:24 compute-0 nervous_pike[198149]:            "vg_name": "ceph_vg0"
Oct  9 10:00:24 compute-0 nervous_pike[198149]:        }
Oct  9 10:00:24 compute-0 nervous_pike[198149]:    ]
Oct  9 10:00:24 compute-0 nervous_pike[198149]: }
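The JSON block emitted by the nervous_pike container is ceph-volume lvm list output, keyed by OSD id, with the lv_tags string duplicated as the parsed "tags" map (the short-lived containers printing "167 167" around it appear to be cephadm probing the ceph uid/gid in the image). A sketch that maps each OSD to its logical volume and backing devices, assuming output shaped like the block above; the direct ceph-volume invocation is an assumption, since on this host it would normally run inside a cephadm-launched ceph container:

    import json
    import subprocess

    # 'ceph-volume lvm list --format json' returns a map of
    # osd_id -> list of LV records like the one printed above.
    raw = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} encrypted={tags['ceph.encrypted']}")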
Oct  9 10:00:24 compute-0 systemd[1]: libpod-9b942484486ca74353e7ae890dea0f1a85760db7fdf7973c180c096fb780766b.scope: Deactivated successfully.
Oct  9 10:00:24 compute-0 podman[198136]: 2025-10-09 10:00:24.275284007 +0000 UTC m=+0.384790118 container died 9b942484486ca74353e7ae890dea0f1a85760db7fdf7973c180c096fb780766b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_pike, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:00:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a8a91ee18fda9e0d8e4b1f13f8229101ed96818441f283278265237bd701b00-merged.mount: Deactivated successfully.
Oct  9 10:00:24 compute-0 podman[198136]: 2025-10-09 10:00:24.310822065 +0000 UTC m=+0.420328175 container remove 9b942484486ca74353e7ae890dea0f1a85760db7fdf7973c180c096fb780766b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_pike, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct  9 10:00:24 compute-0 systemd[1]: libpod-conmon-9b942484486ca74353e7ae890dea0f1a85760db7fdf7973c180c096fb780766b.scope: Deactivated successfully.
Oct  9 10:00:24 compute-0 podman[198250]: 2025-10-09 10:00:24.801589622 +0000 UTC m=+0.032092753 container create d94a378df369870bffe414cde3747b1122c063ba593ed6f5108ff39bca2550af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_goldberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:00:24 compute-0 systemd[1]: Started libpod-conmon-d94a378df369870bffe414cde3747b1122c063ba593ed6f5108ff39bca2550af.scope.
Oct  9 10:00:24 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:00:24 compute-0 podman[198250]: 2025-10-09 10:00:24.869128949 +0000 UTC m=+0.099632091 container init d94a378df369870bffe414cde3747b1122c063ba593ed6f5108ff39bca2550af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 10:00:24 compute-0 podman[198250]: 2025-10-09 10:00:24.875716599 +0000 UTC m=+0.106219710 container start d94a378df369870bffe414cde3747b1122c063ba593ed6f5108ff39bca2550af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  9 10:00:24 compute-0 podman[198250]: 2025-10-09 10:00:24.876838103 +0000 UTC m=+0.107341224 container attach d94a378df369870bffe414cde3747b1122c063ba593ed6f5108ff39bca2550af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_goldberg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:00:24 compute-0 vigilant_goldberg[198264]: 167 167
Oct  9 10:00:24 compute-0 systemd[1]: libpod-d94a378df369870bffe414cde3747b1122c063ba593ed6f5108ff39bca2550af.scope: Deactivated successfully.
Oct  9 10:00:24 compute-0 podman[198250]: 2025-10-09 10:00:24.880437077 +0000 UTC m=+0.110940228 container died d94a378df369870bffe414cde3747b1122c063ba593ed6f5108ff39bca2550af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_goldberg, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:00:24 compute-0 podman[198250]: 2025-10-09 10:00:24.790466994 +0000 UTC m=+0.020970144 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:00:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff638c1f6a4230ffdf60fd849a444d017a86e78de09b38acd7cc8fe51f5f8165-merged.mount: Deactivated successfully.
Oct  9 10:00:24 compute-0 podman[198250]: 2025-10-09 10:00:24.901782248 +0000 UTC m=+0.132285369 container remove d94a378df369870bffe414cde3747b1122c063ba593ed6f5108ff39bca2550af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_goldberg, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  9 10:00:24 compute-0 systemd[1]: libpod-conmon-d94a378df369870bffe414cde3747b1122c063ba593ed6f5108ff39bca2550af.scope: Deactivated successfully.
Oct  9 10:00:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:24 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:25 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:25 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:25 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:25 compute-0 podman[198285]: 2025-10-09 10:00:25.047701178 +0000 UTC m=+0.035945948 container create 849a15429113dd76772f536d804fa67792dc4d11160d5d781567573f2896b4dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:00:25 compute-0 systemd[1]: Started libpod-conmon-849a15429113dd76772f536d804fa67792dc4d11160d5d781567573f2896b4dd.scope.
Oct  9 10:00:25 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dd24bc3d5b1dafb2de2a52a30e5596f1cb3a84f8dcb695a46d643c795802a23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dd24bc3d5b1dafb2de2a52a30e5596f1cb3a84f8dcb695a46d643c795802a23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dd24bc3d5b1dafb2de2a52a30e5596f1cb3a84f8dcb695a46d643c795802a23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dd24bc3d5b1dafb2de2a52a30e5596f1cb3a84f8dcb695a46d643c795802a23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:00:25 compute-0 podman[198285]: 2025-10-09 10:00:25.117573584 +0000 UTC m=+0.105818364 container init 849a15429113dd76772f536d804fa67792dc4d11160d5d781567573f2896b4dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_newton, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  9 10:00:25 compute-0 podman[198285]: 2025-10-09 10:00:25.123265775 +0000 UTC m=+0.111510545 container start 849a15429113dd76772f536d804fa67792dc4d11160d5d781567573f2896b4dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_newton, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:00:25 compute-0 podman[198285]: 2025-10-09 10:00:25.12432443 +0000 UTC m=+0.112569200 container attach 849a15429113dd76772f536d804fa67792dc4d11160d5d781567573f2896b4dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_newton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  9 10:00:25 compute-0 podman[198285]: 2025-10-09 10:00:25.034010358 +0000 UTC m=+0.022255148 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:00:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:25.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:25 compute-0 suspicious_newton[198298]: {}
Oct  9 10:00:25 compute-0 lvm[198375]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:00:25 compute-0 lvm[198375]: VG ceph_vg0 finished
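The two lvm lines record event-based autoactivation: once the PV /dev/loop3 comes online, the volume group ceph_vg0 is complete and its LVs can be activated. The same Ceph-tagged LV can be confirmed from the host with the stock lvs JSON report; a sketch, filtering on the ceph.osd_id tag seen in the listing above:

    import json
    import subprocess

    # 'lvs --reportformat json' is standard LVM2; -o selects columns.
    raw = subprocess.run(
        ["lvs", "--reportformat", "json",
         "-o", "lv_name,vg_name,lv_size,lv_tags"],
        capture_output=True, text=True, check=True,
    ).stdout
    for lv in json.loads(raw)["report"][0]["lv"]:
        if "ceph.osd_id" in lv["lv_tags"]:
            print(lv)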
Oct  9 10:00:25 compute-0 systemd[1]: libpod-849a15429113dd76772f536d804fa67792dc4d11160d5d781567573f2896b4dd.scope: Deactivated successfully.
Oct  9 10:00:25 compute-0 systemd[1]: libpod-849a15429113dd76772f536d804fa67792dc4d11160d5d781567573f2896b4dd.scope: Consumed 1.007s CPU time.
Oct  9 10:00:25 compute-0 conmon[198298]: conmon 849a15429113dd76772f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-849a15429113dd76772f536d804fa67792dc4d11160d5d781567573f2896b4dd.scope/container/memory.events
Oct  9 10:00:25 compute-0 podman[198285]: 2025-10-09 10:00:25.722057252 +0000 UTC m=+0.710302022 container died 849a15429113dd76772f536d804fa67792dc4d11160d5d781567573f2896b4dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  9 10:00:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-9dd24bc3d5b1dafb2de2a52a30e5596f1cb3a84f8dcb695a46d643c795802a23-merged.mount: Deactivated successfully.
Oct  9 10:00:25 compute-0 podman[198285]: 2025-10-09 10:00:25.748972112 +0000 UTC m=+0.737216882 container remove 849a15429113dd76772f536d804fa67792dc4d11160d5d781567573f2896b4dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct  9 10:00:25 compute-0 systemd[1]: libpod-conmon-849a15429113dd76772f536d804fa67792dc4d11160d5d781567573f2896b4dd.scope: Deactivated successfully.
Oct  9 10:00:25 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:00:25 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:00:25 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:00:25 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:00:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:25.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:26 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v820: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 11 KiB/s wr, 2 op/s
Oct  9 10:00:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:26 compute-0 nova_compute[187439]: 2025-10-09 10:00:26.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:26 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:00:26 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:00:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:27.071Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:27.081Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:27.081Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:27.081Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
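
[note] Every Alertmanager failure in the block above reduces to the same root cause: the three ceph-dashboard webhook targets (np0005478302/3/4.shiftstack, port 8443) do not resolve at the DNS server 192.168.122.80. A minimal resolution check, as a sketch; socket.getaddrinfo consults the system resolver rather than 192.168.122.80 specifically, so it only reproduces the failure when run with the same resolver configuration as the Alertmanager container.

    import socket

    # Hostnames taken from the log lines above; a gaierror here corresponds
    # to the "dial tcp: lookup ... no such host" errors Alertmanager reports.
    for host in ("np0005478302.shiftstack",
                 "np0005478303.shiftstack",
                 "np0005478304.shiftstack"):
        try:
            info = socket.getaddrinfo(host, 8443, proto=socket.IPPROTO_TCP)
            print(host, "->", info[0][4][0])
        except socket.gaierror as err:
            print(host, "-> unresolved:", err)
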
Oct  9 10:00:27 compute-0 nova_compute[187439]: 2025-10-09 10:00:27.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:27.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:27 compute-0 podman[198413]: 2025-10-09 10:00:27.608924453 +0000 UTC m=+0.046197041 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  9 10:00:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:27.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:28 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v821: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 4.5 KiB/s wr, 1 op/s
Oct  9 10:00:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:28.905Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:28.913Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:28.913Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:28.913Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:29.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:29.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:30 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:30 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v822: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 4.5 KiB/s wr, 1 op/s
Oct  9 10:00:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:31.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:31 compute-0 podman[198433]: 2025-10-09 10:00:31.605661646 +0000 UTC m=+0.041862020 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001)
Oct  9 10:00:31 compute-0 nova_compute[187439]: 2025-10-09 10:00:31.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:31.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:32 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v823: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 4.9 KiB/s wr, 2 op/s
Oct  9 10:00:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:00:32] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Oct  9 10:00:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:00:32] "GET /metrics HTTP/1.1" 200 48541 "" "Prometheus/2.51.0"
Oct  9 10:00:32 compute-0 nova_compute[187439]: 2025-10-09 10:00:32.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:33.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:33.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:34 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v824: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.3 KiB/s wr, 1 op/s
Oct  9 10:00:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:00:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:00:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:34 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:34 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:34 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:35.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.003000030s ======
Oct  9 10:00:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:35.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000030s
Oct  9 10:00:36 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v825: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 6.3 KiB/s wr, 2 op/s
Oct  9 10:00:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:36 compute-0 nova_compute[187439]: 2025-10-09 10:00:36.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:37.072Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:37.080Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:37.080Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:37.080Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:37 compute-0 nova_compute[187439]: 2025-10-09 10:00:37.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:37.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:37.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:38 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v826: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 5.3 KiB/s wr, 1 op/s
Oct  9 10:00:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:38.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:38.914Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:38.914Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:38.914Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:39.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:39.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:40 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:40 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v827: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 5.3 KiB/s wr, 1 op/s
Oct  9 10:00:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:41.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:41 compute-0 podman[198485]: 2025-10-09 10:00:41.632827823 +0000 UTC m=+0.069157268 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  9 10:00:41 compute-0 nova_compute[187439]: 2025-10-09 10:00:41.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:41.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:42 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v828: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 6.3 KiB/s wr, 2 op/s
Oct  9 10:00:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:00:42] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Oct  9 10:00:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:00:42] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Oct  9 10:00:42 compute-0 nova_compute[187439]: 2025-10-09 10:00:42.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:43.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:43.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:44 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v829: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s
Oct  9 10:00:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:44 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:44 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:44 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:45 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:45 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:45.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:46.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:46 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v830: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 4.7 KiB/s wr, 2 op/s
Oct  9 10:00:46 compute-0 nova_compute[187439]: 2025-10-09 10:00:46.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:00:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:46 compute-0 nova_compute[187439]: 2025-10-09 10:00:46.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:47.074Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:47.095Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:47.095Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:47.096Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:47 compute-0 nova_compute[187439]: 2025-10-09 10:00:47.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:47 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:00:47.450 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  9 10:00:47 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:00:47.450 92053 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  9 10:00:47 compute-0 nova_compute[187439]: 2025-10-09 10:00:47.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:47.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:48.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:48 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v831: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 2.7 KiB/s wr, 1 op/s
Oct  9 10:00:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:48.907Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:48.913Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:48.914Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:48.914Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_10:00:49
Oct  9 10:00:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 10:00:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 10:00:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['.nfs', 'vms', 'cephfs.cephfs.data', 'images', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'volumes', 'default.rgw.log', 'default.rgw.meta']
Oct  9 10:00:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 10:00:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:00:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
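
[note] The two audit lines above show the mgr dispatching "osd blocklist ls" to the mon as a JSON mon_command. The same command can be issued from python-rados; the following is a sketch only, assuming a host with a readable /etc/ceph/ceph.conf and a usable keyring for the client name given (both assumptions, not shown in this log).

    import json
    import rados

    # Sketch: send the same {"prefix": "osd blocklist ls"} mon_command seen
    # in the audit log. mon_command returns (retcode, out buffer, status).
    with rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin") as cluster:
        cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        if ret == 0:
            print(json.loads(outbuf or b"[]"))
        else:
            print("mon_command failed:", ret, outs)
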
Oct  9 10:00:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:49.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:00:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:00:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:00:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:00:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 10:00:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:00:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:00:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:00:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:00:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:00:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:00:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 10:00:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:00:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:00:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:00:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:00:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:50 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:50 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:50.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:50 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v832: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 2.7 KiB/s wr, 1 op/s
Oct  9 10:00:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:51.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:51 compute-0 nova_compute[187439]: 2025-10-09 10:00:51.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:52.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:52 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v833: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 4.8 KiB/s wr, 29 op/s
Oct  9 10:00:52 compute-0 nova_compute[187439]: 2025-10-09 10:00:52.257 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:00:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:00:52] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Oct  9 10:00:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:00:52] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Oct  9 10:00:52 compute-0 nova_compute[187439]: 2025-10-09 10:00:52.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:52 compute-0 podman[198543]: 2025-10-09 10:00:52.580586122 +0000 UTC m=+0.071491768 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  9 10:00:53 compute-0 nova_compute[187439]: 2025-10-09 10:00:53.243 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:00:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:53.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:54.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:54 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v834: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.8 KiB/s wr, 29 op/s
Oct  9 10:00:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:54 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:54 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:54 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:55 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:54 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:55 compute-0 nova_compute[187439]: 2025-10-09 10:00:55.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:00:55 compute-0 nova_compute[187439]: 2025-10-09 10:00:55.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:00:55 compute-0 nova_compute[187439]: 2025-10-09 10:00:55.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:00:55 compute-0 nova_compute[187439]: 2025-10-09 10:00:55.263 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 10:00:55 compute-0 nova_compute[187439]: 2025-10-09 10:00:55.263 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 10:00:55 compute-0 nova_compute[187439]: 2025-10-09 10:00:55.263 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 10:00:55 compute-0 nova_compute[187439]: 2025-10-09 10:00:55.264 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  9 10:00:55 compute-0 nova_compute[187439]: 2025-10-09 10:00:55.264 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 10:00:55 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:00:55.452 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ef217152-08e8-40c8-a663-3565c5b77d4a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  9 10:00:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:55.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:00:55 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3252183875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:00:55 compute-0 nova_compute[187439]: 2025-10-09 10:00:55.626 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:00:55 compute-0 nova_compute[187439]: 2025-10-09 10:00:55.851 2 WARNING nova.virt.libvirt.driver [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:00:55 compute-0 nova_compute[187439]: 2025-10-09 10:00:55.852 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4718MB free_disk=59.942501068115234GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:00:55 compute-0 nova_compute[187439]: 2025-10-09 10:00:55.852 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:00:55 compute-0 nova_compute[187439]: 2025-10-09 10:00:55.853 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:00:55 compute-0 nova_compute[187439]: 2025-10-09 10:00:55.936 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:00:55 compute-0 nova_compute[187439]: 2025-10-09 10:00:55.936 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:00:55 compute-0 nova_compute[187439]: 2025-10-09 10:00:55.971 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Refreshing inventories for resource provider f97cf330-2912-473f-81a8-cda2f8811838 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  9 10:00:56 compute-0 nova_compute[187439]: 2025-10-09 10:00:56.011 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Updating ProviderTree inventory for provider f97cf330-2912-473f-81a8-cda2f8811838 from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  9 10:00:56 compute-0 nova_compute[187439]: 2025-10-09 10:00:56.011 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Updating inventory in ProviderTree for provider f97cf330-2912-473f-81a8-cda2f8811838 with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  9 10:00:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:56.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:56 compute-0 nova_compute[187439]: 2025-10-09 10:00:56.022 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Refreshing aggregate associations for resource provider f97cf330-2912-473f-81a8-cda2f8811838, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  9 10:00:56 compute-0 nova_compute[187439]: 2025-10-09 10:00:56.036 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Refreshing trait associations for resource provider f97cf330-2912-473f-81a8-cda2f8811838, traits: HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX2,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX512VAES,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSSE3,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  9 10:00:56 compute-0 nova_compute[187439]: 2025-10-09 10:00:56.054 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:00:56 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v835: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 7.0 KiB/s wr, 57 op/s
Oct  9 10:00:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:00:56.182 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:89:5b'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed655dd9-bb73-453e-8a8b-a0dd965263b3, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=188102c6-f5ba-4733-92be-2659db7ae55a) old=Port_Binding(mac=['fa:16:3e:77:89:5b 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:00:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:00:56.185 92053 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 188102c6-f5ba-4733-92be-2659db7ae55a in datapath ab21f371-26e2-4c4f-bba0-3c44fb308723 updated#033[00m
Oct  9 10:00:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:00:56.186 92053 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ab21f371-26e2-4c4f-bba0-3c44fb308723 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  9 10:00:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:00:56.190 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[83d5acc9-d512-479c-a7a0-7e14a7c076a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:00:56 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3594715963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:00:56 compute-0 nova_compute[187439]: 2025-10-09 10:00:56.421 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:00:56 compute-0 nova_compute[187439]: 2025-10-09 10:00:56.425 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:00:56 compute-0 nova_compute[187439]: 2025-10-09 10:00:56.438 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:00:56 compute-0 nova_compute[187439]: 2025-10-09 10:00:56.440 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:00:56 compute-0 nova_compute[187439]: 2025-10-09 10:00:56.440 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:00:56 compute-0 nova_compute[187439]: 2025-10-09 10:00:56.441 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:56 compute-0 nova_compute[187439]: 2025-10-09 10:00:56.441 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  9 10:00:56 compute-0 nova_compute[187439]: 2025-10-09 10:00:56.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:57.075Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:57.229Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:57.229Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:57.229Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:57 compute-0 nova_compute[187439]: 2025-10-09 10:00:57.450 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:57 compute-0 nova_compute[187439]: 2025-10-09 10:00:57.450 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:57 compute-0 nova_compute[187439]: 2025-10-09 10:00:57.450 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:00:57 compute-0 nova_compute[187439]: 2025-10-09 10:00:57.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:57.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:58.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:58 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v836: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 5.3 KiB/s wr, 56 op/s
Oct  9 10:00:58 compute-0 nova_compute[187439]: 2025-10-09 10:00:58.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:58 compute-0 nova_compute[187439]: 2025-10-09 10:00:58.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:58 compute-0 nova_compute[187439]: 2025-10-09 10:00:58.247 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  9 10:00:58 compute-0 nova_compute[187439]: 2025-10-09 10:00:58.259 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  9 10:00:58 compute-0 podman[198612]: 2025-10-09 10:00:58.598720436 +0000 UTC m=+0.039777019 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  9 10:00:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:58.907Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:58.921Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:58.921Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:00:58.922Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:00:59 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:59 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:59 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:59 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:00:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:59 compute-0 nova_compute[187439]: 2025-10-09 10:00:59.259 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:59 compute-0 nova_compute[187439]: 2025-10-09 10:00:59.259 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:00:59 compute-0 nova_compute[187439]: 2025-10-09 10:00:59.260 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:00:59 compute-0 nova_compute[187439]: 2025-10-09 10:00:59.272 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:00:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  9 10:00:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:00:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:59.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:00.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:00 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v837: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 5.3 KiB/s wr, 56 op/s
Oct  9 10:01:00 compute-0 nova_compute[187439]: 2025-10-09 10:01:00.256 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:01.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:01 compute-0 nova_compute[187439]: 2025-10-09 10:01:01.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:01 compute-0 nova_compute[187439]: 2025-10-09 10:01:01.937 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:02.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:02 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v838: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 5.3 KiB/s wr, 57 op/s
Oct  9 10:01:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:01:02] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Oct  9 10:01:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:01:02] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Oct  9 10:01:02 compute-0 nova_compute[187439]: 2025-10-09 10:01:02.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:02 compute-0 podman[198643]: 2025-10-09 10:01:02.620696692 +0000 UTC m=+0.055342544 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 10:01:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:03 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:03.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:04.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v839: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct  9 10:01:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:01:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:01:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:05.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:06.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v840: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Oct  9 10:01:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:06 compute-0 nova_compute[187439]: 2025-10-09 10:01:06.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:07.077Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:07.086Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:07.086Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:07.087Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:07 compute-0 nova_compute[187439]: 2025-10-09 10:01:07.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:07.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:08 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:01:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:08.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:01:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v841: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.245 2 DEBUG oslo_concurrency.lockutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "4640d9c1-5670-4ad1-a4f3-488fb30df455" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.246 2 DEBUG oslo_concurrency.lockutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "4640d9c1-5670-4ad1-a4f3-488fb30df455" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.258 2 DEBUG nova.compute.manager [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:01:08.307841) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068308337, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 669, "num_deletes": 251, "total_data_size": 945490, "memory_usage": 958504, "flush_reason": "Manual Compaction"}
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068312957, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 932885, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24004, "largest_seqno": 24672, "table_properties": {"data_size": 929346, "index_size": 1383, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8231, "raw_average_key_size": 19, "raw_value_size": 922210, "raw_average_value_size": 2190, "num_data_blocks": 61, "num_entries": 421, "num_filter_entries": 421, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004022, "oldest_key_time": 1760004022, "file_creation_time": 1760004068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 4715 microseconds, and 2943 cpu microseconds.
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:01:08.313011) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 932885 bytes OK
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:01:08.313036) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:01:08.313640) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:01:08.313652) EVENT_LOG_v1 {"time_micros": 1760004068313649, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:01:08.313665) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 942008, prev total WAL file size 942008, number of live WAL files 2.
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:01:08.314063) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(911KB)], [53(12MB)]
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068314087, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 14464033, "oldest_snapshot_seqno": -1}
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.335 2 DEBUG oslo_concurrency.lockutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.335 2 DEBUG oslo_concurrency.lockutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.341 2 DEBUG nova.virt.hardware [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.342 2 INFO nova.compute.claims [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5347 keys, 12339745 bytes, temperature: kUnknown
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068347651, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 12339745, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12304651, "index_size": 20648, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 137318, "raw_average_key_size": 25, "raw_value_size": 12208113, "raw_average_value_size": 2283, "num_data_blocks": 837, "num_entries": 5347, "num_filter_entries": 5347, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002419, "oldest_key_time": 0, "file_creation_time": 1760004068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:01:08.347820) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12339745 bytes
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:01:08.348272) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 430.4 rd, 367.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 12.9 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(28.7) write-amplify(13.2) OK, records in: 5863, records dropped: 516 output_compression: NoCompression
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:01:08.348286) EVENT_LOG_v1 {"time_micros": 1760004068348279, "job": 28, "event": "compaction_finished", "compaction_time_micros": 33609, "compaction_time_cpu_micros": 24375, "output_level": 6, "num_output_files": 1, "total_output_size": 12339745, "num_input_records": 5863, "num_output_records": 5347, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068348478, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068350572, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:01:08.313996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:01:08.350598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:01:08.350601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:01:08.350602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:01:08.350604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:01:08 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:01:08.350605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
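Note: the ceph-mon's embedded RocksDB interleaves structured EVENT_LOG_v1 records (flush_finished, compaction_started, compaction_finished) with the human-readable lines above; job 28 reports 5863 input records and 5347 output records, i.e. 516 dropped entries, matching the "records dropped: 516" summary. A minimal sketch for pulling those JSON payloads out of an exported copy of this journal (the export path is hypothetical):

import json
import re

LOG_PATH = "compute-0-messages.log"  # hypothetical plain-text export of this journal

# EVENT_LOG_v1 records carry a JSON object after the marker.
EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

events = []
with open(LOG_PATH, encoding="utf-8", errors="replace") as fh:
    for line in fh:
        m = EVENT_RE.search(line)
        if m:
            events.append(json.loads(m.group(1)))

# Summarize compactions like JOB 28 above: input vs. output record counts.
for ev in events:
    if ev.get("event") == "compaction_finished":
        dropped = ev["num_input_records"] - ev["num_output_records"]
        print(f"job {ev['job']}: level {ev['output_level']}, "
              f"{ev['num_input_records']} in, {ev['num_output_records']} out "
              f"({dropped} dropped), {ev['compaction_time_micros']} us")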
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.412 2 DEBUG oslo_concurrency.processutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:01:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:01:08 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2949658230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.780 2 DEBUG oslo_concurrency.processutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.369s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
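Note: the ceph df --format=json call above is how nova's RBD image backend samples cluster capacity for its disk accounting. A sketch that runs the same command and reads the cluster-wide totals (the stats field names follow recent Ceph releases and may differ on older ones):

import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout

stats = json.loads(out)["stats"]
# Cluster-wide raw totals, in bytes.
print(stats["total_bytes"], stats["total_avail_bytes"])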
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.785 2 DEBUG nova.compute.provider_tree [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.797 2 DEBUG nova.scheduler.client.report [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.809 2 DEBUG oslo_concurrency.lockutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.473s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
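Note: the inventory logged just above fixes what the scheduler can place here; placement treats effective capacity as (total - reserved) * allocation_ratio per resource class. Worked through with the logged numbers (a standalone sketch, not nova code):

# Inventory as reported at 10:01:08 for provider f97cf330-2912-473f-81a8-cda2f8811838.
inventory = {
    "VCPU":      {"total": 4,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

# Placement's effective capacity: (total - reserved) * allocation_ratio.
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g} schedulable")
# VCPU: 16, MEMORY_MB: 7168, DISK_GB: 52.2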
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.809 2 DEBUG nova.compute.manager [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.843 2 DEBUG nova.compute.manager [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.844 2 DEBUG nova.network.neutron [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.856 2 INFO nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.865 2 DEBUG nova.compute.manager [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  9 10:01:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:08.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:08.916Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:08.917Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:08.917Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
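Note: the three Alertmanager failures above are a DNS problem, not a dashboard one; the *.shiftstack receiver names do not resolve via 192.168.122.80. A quick check of the same lookups (this uses the system resolver, which may differ from the one Alertmanager hit):

import socket

hosts = [
    "np0005478302.shiftstack",
    "np0005478303.shiftstack",
    "np0005478304.shiftstack",
]

for host in hosts:
    try:
        # Same lookup Alertmanager's HTTP client performs before POSTing
        # to http://<host>:8443/api/prometheus_receiver.
        addrs = {ai[4][0] for ai in socket.getaddrinfo(host, 8443)}
        print(f"{host}: {', '.join(sorted(addrs))}")
    except socket.gaierror as exc:
        print(f"{host}: lookup failed ({exc})")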
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.920 2 DEBUG nova.compute.manager [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.921 2 DEBUG nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.921 2 INFO nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Creating image(s)#033[00m
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.944 2 DEBUG nova.storage.rbd_utils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 4640d9c1-5670-4ad1-a4f3-488fb30df455_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.972 2 DEBUG nova.storage.rbd_utils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 4640d9c1-5670-4ad1-a4f3-488fb30df455_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:01:08 compute-0 nova_compute[187439]: 2025-10-09 10:01:08.996 2 DEBUG nova.storage.rbd_utils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 4640d9c1-5670-4ad1-a4f3-488fb30df455_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:01:09 compute-0 nova_compute[187439]: 2025-10-09 10:01:09.001 2 DEBUG oslo_concurrency.processutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:01:09 compute-0 nova_compute[187439]: 2025-10-09 10:01:09.074 2 DEBUG oslo_concurrency.processutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
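Note: nova probes the cached base image with qemu-img info under a prlimit cap (1 GiB address space, 30 s CPU, per the logged command) so a malformed image cannot exhaust the host. A bare-bones equivalent of the probe, minus the prlimit wrapper, assuming the logged base path exists:

import json
import subprocess

BASE = "/var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb"

# --force-share lets us inspect an image another process may have open.
out = subprocess.run(
    ["qemu-img", "info", BASE, "--force-share", "--output=json"],
    check=True, capture_output=True, text=True,
).stdout

info = json.loads(out)
print(info["format"], info["virtual-size"], info.get("actual-size"))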
Oct  9 10:01:09 compute-0 nova_compute[187439]: 2025-10-09 10:01:09.075 2 DEBUG oslo_concurrency.lockutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:09 compute-0 nova_compute[187439]: 2025-10-09 10:01:09.076 2 DEBUG oslo_concurrency.lockutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:09 compute-0 nova_compute[187439]: 2025-10-09 10:01:09.076 2 DEBUG oslo_concurrency.lockutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:09 compute-0 nova_compute[187439]: 2025-10-09 10:01:09.100 2 DEBUG nova.storage.rbd_utils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 4640d9c1-5670-4ad1-a4f3-488fb30df455_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:01:09 compute-0 nova_compute[187439]: 2025-10-09 10:01:09.104 2 DEBUG oslo_concurrency.processutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb 4640d9c1-5670-4ad1-a4f3-488fb30df455_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:01:09 compute-0 nova_compute[187439]: 2025-10-09 10:01:09.243 2 DEBUG nova.policy [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  9 10:01:09 compute-0 nova_compute[187439]: 2025-10-09 10:01:09.282 2 DEBUG oslo_concurrency.processutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb 4640d9c1-5670-4ad1-a4f3-488fb30df455_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:01:09 compute-0 nova_compute[187439]: 2025-10-09 10:01:09.341 2 DEBUG nova.storage.rbd_utils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] resizing rbd image 4640d9c1-5670-4ad1-a4f3-488fb30df455_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  9 10:01:09 compute-0 nova_compute[187439]: 2025-10-09 10:01:09.418 2 DEBUG nova.objects.instance [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'migration_context' on Instance uuid 4640d9c1-5670-4ad1-a4f3-488fb30df455 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 10:01:09 compute-0 nova_compute[187439]: 2025-10-09 10:01:09.442 2 DEBUG nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  9 10:01:09 compute-0 nova_compute[187439]: 2025-10-09 10:01:09.442 2 DEBUG nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Ensure instance console log exists: /var/lib/nova/instances/4640d9c1-5670-4ad1-a4f3-488fb30df455/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  9 10:01:09 compute-0 nova_compute[187439]: 2025-10-09 10:01:09.443 2 DEBUG oslo_concurrency.lockutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:09 compute-0 nova_compute[187439]: 2025-10-09 10:01:09.443 2 DEBUG oslo_concurrency.lockutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:09 compute-0 nova_compute[187439]: 2025-10-09 10:01:09.444 2 DEBUG oslo_concurrency.lockutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
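Note: with the RBD image backend, "Creating image(s)" reduces to confirming the target image is absent, importing the flat base file into the vms pool, then growing it to the flavor's 1 GiB root disk. Nova does the resize through the librbd Python bindings (rbd_utils.resize); the CLI replay below is an approximation using the names from the log, suitable only for a throwaway image:

import subprocess

POOL = "vms"
DISK = "4640d9c1-5670-4ad1-a4f3-488fb30df455_disk"   # <instance uuid>_disk
BASE = "/var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb"
CEPH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

# Step 1: import the flat base image as a format-2 RBD image (cf. 10:01:09.104).
subprocess.run(["rbd", "import", "--pool", POOL, BASE, DISK,
                "--image-format=2", *CEPH], check=True)

# Step 2: grow it to the flavor root disk, 1 GiB (cf. "resizing ... to 1073741824").
subprocess.run(["rbd", "resize", "--pool", POOL, "--image", DISK,
                "--size", "1024", *CEPH], check=True)  # --size is in MiB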
Oct  9 10:01:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:09.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:10.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v842: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:01:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:10.113 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:10.114 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:10.114 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.204 2 DEBUG nova.network.neutron [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Successfully updated port: 24c642bf-d3e7-4003-97f5-0e43aca6db7b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.214 2 DEBUG oslo_concurrency.lockutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-4640d9c1-5670-4ad1-a4f3-488fb30df455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.214 2 DEBUG oslo_concurrency.lockutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-4640d9c1-5670-4ad1-a4f3-488fb30df455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.214 2 DEBUG nova.network.neutron [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.277 2 DEBUG nova.compute.manager [req-750750c9-2933-4042-bf7c-e952ec423d8b req-3430ac11-3fa8-484e-b0db-96e83b47da41 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Received event network-changed-24c642bf-d3e7-4003-97f5-0e43aca6db7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.277 2 DEBUG nova.compute.manager [req-750750c9-2933-4042-bf7c-e952ec423d8b req-3430ac11-3fa8-484e-b0db-96e83b47da41 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Refreshing instance network info cache due to event network-changed-24c642bf-d3e7-4003-97f5-0e43aca6db7b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.277 2 DEBUG oslo_concurrency.lockutils [req-750750c9-2933-4042-bf7c-e952ec423d8b req-3430ac11-3fa8-484e-b0db-96e83b47da41 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-4640d9c1-5670-4ad1-a4f3-488fb30df455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.342 2 DEBUG nova.network.neutron [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.904 2 DEBUG nova.network.neutron [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Updating instance_info_cache with network_info: [{"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.915 2 DEBUG oslo_concurrency.lockutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-4640d9c1-5670-4ad1-a4f3-488fb30df455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.915 2 DEBUG nova.compute.manager [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Instance network_info: |[{"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.916 2 DEBUG oslo_concurrency.lockutils [req-750750c9-2933-4042-bf7c-e952ec423d8b req-3430ac11-3fa8-484e-b0db-96e83b47da41 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-4640d9c1-5670-4ad1-a4f3-488fb30df455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.916 2 DEBUG nova.network.neutron [req-750750c9-2933-4042-bf7c-e952ec423d8b req-3430ac11-3fa8-484e-b0db-96e83b47da41 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Refreshing network info cache for port 24c642bf-d3e7-4003-97f5-0e43aca6db7b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
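Note: the network_info blob logged at 10:01:10.904 is what later drives the guest XML and the OVS plugging. A small helper for flattening it to port/MAC/fixed-IP triples; the dict below is a trimmed copy of the logged entry, keeping only the fields this sketch reads:

# Trimmed copy of the network_info entry logged at 10:01:10.904.
network_info = [{
    "id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b",
    "address": "fa:16:3e:d9:5b:8d",
    "network": {
        "id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147",
        "bridge": "br-int",
        "subnets": [{
            "cidr": "10.100.0.0/28",
            "ips": [{"address": "10.100.0.5", "type": "fixed"}],
        }],
        "meta": {"mtu": 1442, "tunneled": True},
    },
    "devname": "tap24c642bf-d3",
    "vnic_type": "normal",
}]

# Flatten to "port -> MAC -> fixed IPs", the view most debugging starts from.
for vif in network_info:
    ips = [ip["address"]
           for subnet in vif["network"]["subnets"]
           for ip in subnet["ips"]]
    print(vif["id"], vif["address"], ips)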
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.918 2 DEBUG nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Start _get_guest_xml network_info=[{"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'boot_index': 0, 'encryption_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'encrypted': False, 'guest_format': None, 'image_id': '9546778e-959c-466e-9bef-81ace5bd1cc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.922 2 WARNING nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.927 2 DEBUG nova.virt.libvirt.host [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.928 2 DEBUG nova.virt.libvirt.host [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.930 2 DEBUG nova.virt.libvirt.host [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.930 2 DEBUG nova.virt.libvirt.host [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.930 2 DEBUG nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.931 2 DEBUG nova.virt.hardware [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T09:54:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c4b2ce4-c9d2-467c-bac4-dc6a1184a891',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.931 2 DEBUG nova.virt.hardware [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.931 2 DEBUG nova.virt.hardware [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.931 2 DEBUG nova.virt.hardware [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.931 2 DEBUG nova.virt.hardware [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.931 2 DEBUG nova.virt.hardware [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.932 2 DEBUG nova.virt.hardware [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.932 2 DEBUG nova.virt.hardware [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.932 2 DEBUG nova.virt.hardware [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.932 2 DEBUG nova.virt.hardware [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.932 2 DEBUG nova.virt.hardware [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
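Note: with all flavor and image limits and preferences at 0:0:0, nova enumerates the valid (sockets, cores, threads) factorizations of the vCPU count and sorts them by preference; for 1 vCPU the only candidate is 1:1:1, which is exactly what lands in the <topology> element of the guest XML below. A toy re-derivation of the enumeration step (a simplification, not nova's actual code path):

from itertools import product

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    """All (sockets, cores, threads) whose product matches vcpus exactly."""
    topos = []
    for s, c, t in product(range(1, vcpus + 1), repeat=3):
        if s * c * t == vcpus and s <= max_sockets and c <= max_cores and t <= max_threads:
            topos.append((s, c, t))
    return topos

print(possible_topologies(1))   # [(1, 1, 1)] -- the single candidate logged above
print(possible_topologies(4))   # the six factorizations of 4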
Oct  9 10:01:10 compute-0 nova_compute[187439]: 2025-10-09 10:01:10.934 2 DEBUG oslo_concurrency.processutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:01:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 10:01:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4140615067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.309 2 DEBUG oslo_concurrency.processutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.375s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.335 2 DEBUG nova.storage.rbd_utils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 4640d9c1-5670-4ad1-a4f3-488fb30df455_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.338 2 DEBUG oslo_concurrency.processutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:01:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.592 2 DEBUG nova.network.neutron [req-750750c9-2933-4042-bf7c-e952ec423d8b req-3430ac11-3fa8-484e-b0db-96e83b47da41 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Updated VIF entry in instance network info cache for port 24c642bf-d3e7-4003-97f5-0e43aca6db7b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.593 2 DEBUG nova.network.neutron [req-750750c9-2933-4042-bf7c-e952ec423d8b req-3430ac11-3fa8-484e-b0db-96e83b47da41 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Updating instance_info_cache with network_info: [{"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.604 2 DEBUG oslo_concurrency.lockutils [req-750750c9-2933-4042-bf7c-e952ec423d8b req-3430ac11-3fa8-484e-b0db-96e83b47da41 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-4640d9c1-5670-4ad1-a4f3-488fb30df455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:01:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:11.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 10:01:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1742209419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.721 2 DEBUG oslo_concurrency.processutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.382s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
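Note: the two ceph mon dump calls supply the monitor endpoints that become the <host> elements of the RBD disk source in the XML below. A sketch that extracts them the same way (the addr format matches the msgr v1 port-6789 entries in this log; the exact field layout can vary across Ceph releases):

import json
import subprocess

out = subprocess.run(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout

monmap = json.loads(out)
for mon in monmap["mons"]:
    # "addr" looks like "192.168.122.100:6789/0"; strip the trailing nonce.
    host, port = mon["addr"].split("/")[0].rsplit(":", 1)
    print(mon["name"], host, port)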
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.722 2 DEBUG nova.virt.libvirt.vif [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T10:01:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1925701042',display_name='tempest-TestNetworkBasicOps-server-1925701042',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1925701042',id=8,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP5qxPoCVJd5VnANzq6gXzu8Qg3VPhJTeiwPxTw4MegyVVNhe0MLS0a5xNScn1jiWodD1exagc6TYLbTjhulbxBE5a8G/SpWx3o0pPaddfHf09aIr3WlCbNx5ag3JmOgEg==',key_name='tempest-TestNetworkBasicOps-743261970',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-das5r866',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T10:01:08Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=4640d9c1-5670-4ad1-a4f3-488fb30df455,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.723 2 DEBUG nova.network.os_vif_util [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.724 2 DEBUG nova.network.os_vif_util [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5b:8d,bridge_name='br-int',has_traffic_filtering=True,id=24c642bf-d3e7-4003-97f5-0e43aca6db7b,network=Network(f1bd1d23-0de7-4b9c-b34f-27d8df0f3147),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap24c642bf-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.724 2 DEBUG nova.objects.instance [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4640d9c1-5670-4ad1-a4f3-488fb30df455 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.738 2 DEBUG nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] End _get_guest_xml xml=<domain type="kvm">
Oct  9 10:01:11 compute-0 nova_compute[187439]:  <uuid>4640d9c1-5670-4ad1-a4f3-488fb30df455</uuid>
Oct  9 10:01:11 compute-0 nova_compute[187439]:  <name>instance-00000008</name>
Oct  9 10:01:11 compute-0 nova_compute[187439]:  <memory>131072</memory>
Oct  9 10:01:11 compute-0 nova_compute[187439]:  <vcpu>1</vcpu>
Oct  9 10:01:11 compute-0 nova_compute[187439]:  <metadata>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <nova:name>tempest-TestNetworkBasicOps-server-1925701042</nova:name>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <nova:creationTime>2025-10-09 10:01:10</nova:creationTime>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <nova:flavor name="m1.nano">
Oct  9 10:01:11 compute-0 nova_compute[187439]:        <nova:memory>128</nova:memory>
Oct  9 10:01:11 compute-0 nova_compute[187439]:        <nova:disk>1</nova:disk>
Oct  9 10:01:11 compute-0 nova_compute[187439]:        <nova:swap>0</nova:swap>
Oct  9 10:01:11 compute-0 nova_compute[187439]:        <nova:ephemeral>0</nova:ephemeral>
Oct  9 10:01:11 compute-0 nova_compute[187439]:        <nova:vcpus>1</nova:vcpus>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      </nova:flavor>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <nova:owner>
Oct  9 10:01:11 compute-0 nova_compute[187439]:        <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct  9 10:01:11 compute-0 nova_compute[187439]:        <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      </nova:owner>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <nova:ports>
Oct  9 10:01:11 compute-0 nova_compute[187439]:        <nova:port uuid="24c642bf-d3e7-4003-97f5-0e43aca6db7b">
Oct  9 10:01:11 compute-0 nova_compute[187439]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:        </nova:port>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      </nova:ports>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    </nova:instance>
Oct  9 10:01:11 compute-0 nova_compute[187439]:  </metadata>
Oct  9 10:01:11 compute-0 nova_compute[187439]:  <sysinfo type="smbios">
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <system>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <entry name="manufacturer">RDO</entry>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <entry name="product">OpenStack Compute</entry>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <entry name="serial">4640d9c1-5670-4ad1-a4f3-488fb30df455</entry>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <entry name="uuid">4640d9c1-5670-4ad1-a4f3-488fb30df455</entry>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <entry name="family">Virtual Machine</entry>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    </system>
Oct  9 10:01:11 compute-0 nova_compute[187439]:  </sysinfo>
Oct  9 10:01:11 compute-0 nova_compute[187439]:  <os>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <boot dev="hd"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <smbios mode="sysinfo"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:  </os>
Oct  9 10:01:11 compute-0 nova_compute[187439]:  <features>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <acpi/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <apic/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <vmcoreinfo/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:  </features>
Oct  9 10:01:11 compute-0 nova_compute[187439]:  <clock offset="utc">
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <timer name="pit" tickpolicy="delay"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <timer name="hpet" present="no"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:  </clock>
Oct  9 10:01:11 compute-0 nova_compute[187439]:  <cpu mode="host-model" match="exact">
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <topology sockets="1" cores="1" threads="1"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:  </cpu>
Oct  9 10:01:11 compute-0 nova_compute[187439]:  <devices>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <disk type="network" device="disk">
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <driver type="raw" cache="none"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <source protocol="rbd" name="vms/4640d9c1-5670-4ad1-a4f3-488fb30df455_disk">
Oct  9 10:01:11 compute-0 nova_compute[187439]:        <host name="192.168.122.100" port="6789"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:        <host name="192.168.122.102" port="6789"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:        <host name="192.168.122.101" port="6789"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      </source>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <auth username="openstack">
Oct  9 10:01:11 compute-0 nova_compute[187439]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      </auth>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <target dev="vda" bus="virtio"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    </disk>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <disk type="network" device="cdrom">
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <driver type="raw" cache="none"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <source protocol="rbd" name="vms/4640d9c1-5670-4ad1-a4f3-488fb30df455_disk.config">
Oct  9 10:01:11 compute-0 nova_compute[187439]:        <host name="192.168.122.100" port="6789"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:        <host name="192.168.122.102" port="6789"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:        <host name="192.168.122.101" port="6789"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      </source>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <auth username="openstack">
Oct  9 10:01:11 compute-0 nova_compute[187439]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      </auth>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <target dev="sda" bus="sata"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    </disk>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <interface type="ethernet">
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <mac address="fa:16:3e:d9:5b:8d"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <model type="virtio"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <driver name="vhost" rx_queue_size="512"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <mtu size="1442"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <target dev="tap24c642bf-d3"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    </interface>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <serial type="pty">
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <log file="/var/lib/nova/instances/4640d9c1-5670-4ad1-a4f3-488fb30df455/console.log" append="off"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    </serial>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <video>
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <model type="virtio"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    </video>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <input type="tablet" bus="usb"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <rng model="virtio">
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <backend model="random">/dev/urandom</backend>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    </rng>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <controller type="usb" index="0"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    <memballoon model="virtio">
Oct  9 10:01:11 compute-0 nova_compute[187439]:      <stats period="10"/>
Oct  9 10:01:11 compute-0 nova_compute[187439]:    </memballoon>
Oct  9 10:01:11 compute-0 nova_compute[187439]:  </devices>
Oct  9 10:01:11 compute-0 nova_compute[187439]: </domain>
Oct  9 10:01:11 compute-0 nova_compute[187439]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
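
The block above is the complete libvirt guest definition Nova generated for instance-00000008. Note that libvirt's <memory> element counts KiB, so 131072 KiB is exactly the 128 MiB of the m1.nano flavor, and the <mtu size="1442"/> on the interface is the 1500-byte underlay MTU minus the 58 bytes of Geneve encapsulation overhead OVN adds. A minimal, hypothetical sketch for pulling the headline fields back out of such an XML dump (the file path and helper name are illustrative, not part of the log):

    # summarize_domain is a hypothetical helper; it assumes the XML above
    # was saved verbatim to a local file.
    import xml.etree.ElementTree as ET

    NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

    def summarize_domain(path):
        root = ET.parse(path).getroot()            # <domain type="kvm">
        memory_kib = int(root.findtext("memory"))  # libvirt stores KiB: 131072 KiB == 128 MiB
        flavor = root.find(".//nova:flavor", NOVA_NS)
        return {
            "name": root.findtext("name"),
            "memory_mib": memory_kib // 1024,
            "vcpus": int(root.findtext("vcpu")),
            "flavor": flavor.get("name") if flavor is not None else None,
        }

    print(summarize_domain("/tmp/instance-00000008.xml"))
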
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.739 2 DEBUG nova.compute.manager [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Preparing to wait for external event network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.739 2 DEBUG oslo_concurrency.lockutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "4640d9c1-5670-4ad1-a4f3-488fb30df455-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.739 2 DEBUG oslo_concurrency.lockutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "4640d9c1-5670-4ad1-a4f3-488fb30df455-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.740 2 DEBUG oslo_concurrency.lockutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "4640d9c1-5670-4ad1-a4f3-488fb30df455-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
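
The three lockutils lines above show Nova registering a waiter for the upcoming network-vif-plugged event under a per-instance lock named "<uuid>-events" before it starts plugging the VIF, so the Neutron callback cannot race the registration. A minimal sketch of that locking pattern using the same oslo.concurrency primitive (names illustrative):

    from oslo_concurrency import lockutils

    instance_uuid = "4640d9c1-5670-4ad1-a4f3-488fb30df455"  # from the log

    # Same convention as the log: one lock per instance guarding the
    # table of pending external events.
    with lockutils.lock(f"{instance_uuid}-events"):
        # register (or later pop) a pending event for this instance here,
        # safe from the thread that services Neutron's callback
        pass
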
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.740 2 DEBUG nova.virt.libvirt.vif [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T10:01:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1925701042',display_name='tempest-TestNetworkBasicOps-server-1925701042',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1925701042',id=8,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP5qxPoCVJd5VnANzq6gXzu8Qg3VPhJTeiwPxTw4MegyVVNhe0MLS0a5xNScn1jiWodD1exagc6TYLbTjhulbxBE5a8G/SpWx3o0pPaddfHf09aIr3WlCbNx5ag3JmOgEg==',key_name='tempest-TestNetworkBasicOps-743261970',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-das5r866',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T10:01:08Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=4640d9c1-5670-4ad1-a4f3-488fb30df455,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.741 2 DEBUG nova.network.os_vif_util [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.741 2 DEBUG nova.network.os_vif_util [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5b:8d,bridge_name='br-int',has_traffic_filtering=True,id=24c642bf-d3e7-4003-97f5-0e43aca6db7b,network=Network(f1bd1d23-0de7-4b9c-b34f-27d8df0f3147),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap24c642bf-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.742 2 DEBUG os_vif [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5b:8d,bridge_name='br-int',has_traffic_filtering=True,id=24c642bf-d3e7-4003-97f5-0e43aca6db7b,network=Network(f1bd1d23-0de7-4b9c-b34f-27d8df0f3147),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap24c642bf-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.743 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.743 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.747 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24c642bf-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.748 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap24c642bf-d3, col_values=(('external_ids', {'iface-id': '24c642bf-d3e7-4003-97f5-0e43aca6db7b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:5b:8d', 'vm-uuid': '4640d9c1-5670-4ad1-a4f3-488fb30df455'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
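
The two ovsdbapp commands above (AddPortCommand followed by DbSetCommand) are what actually attach the tap device to br-int and stamp its Interface row with the external_ids that ovn-controller matches on. A hypothetical reproduction of the same transaction with the ovs-vsctl CLI, useful when debugging by hand on the host:

    import subprocess

    port = "tap24c642bf-d3"
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         "external_ids:iface-id=24c642bf-d3e7-4003-97f5-0e43aca6db7b",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:d9:5b:8d",
         "external_ids:vm-uuid=4640d9c1-5670-4ad1-a4f3-488fb30df455"],
        check=True,
    )
    # iface-id is the Neutron port UUID; ovn-controller watches for it and
    # claims the matching logical port (see the binding messages below).
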
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:11 compute-0 NetworkManager[982]: <info>  [1760004071.7502] manager: (tap24c642bf-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.757 2 INFO os_vif [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5b:8d,bridge_name='br-int',has_traffic_filtering=True,id=24c642bf-d3e7-4003-97f5-0e43aca6db7b,network=Network(f1bd1d23-0de7-4b9c-b34f-27d8df0f3147),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap24c642bf-d3')#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.789 2 DEBUG nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.790 2 DEBUG nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.790 2 DEBUG nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:d9:5b:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.790 2 INFO nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Using config drive#033[00m
Oct  9 10:01:11 compute-0 nova_compute[187439]: 2025-10-09 10:01:11.815 2 DEBUG nova.storage.rbd_utils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 4640d9c1-5670-4ad1-a4f3-488fb30df455_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:01:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:01:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:12.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:01:12 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v843: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  9 10:01:12 compute-0 nova_compute[187439]: 2025-10-09 10:01:12.211 2 INFO nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Creating config drive at /var/lib/nova/instances/4640d9c1-5670-4ad1-a4f3-488fb30df455/disk.config#033[00m
Oct  9 10:01:12 compute-0 nova_compute[187439]: 2025-10-09 10:01:12.216 2 DEBUG oslo_concurrency.processutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4640d9c1-5670-4ad1-a4f3-488fb30df455/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqayubrti execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:01:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:01:12] "GET /metrics HTTP/1.1" 200 48523 "" "Prometheus/2.51.0"
Oct  9 10:01:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:01:12] "GET /metrics HTTP/1.1" 200 48523 "" "Prometheus/2.51.0"
Oct  9 10:01:12 compute-0 nova_compute[187439]: 2025-10-09 10:01:12.345 2 DEBUG oslo_concurrency.processutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4640d9c1-5670-4ad1-a4f3-488fb30df455/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqayubrti" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
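
Config drives are plain ISO9660 images: Nova stages the metadata tree in a temp directory (the /tmp/tmpqayubrti above) and packs it with mkisofs, labelling the volume config-2 so cloud-init inside the guest can locate it. A hedged sketch of an equivalent invocation, with placeholder paths and a shortened publisher string:

    import subprocess

    subprocess.run(
        ["/usr/bin/mkisofs",
         "-o", "/tmp/disk.config",           # output ISO image
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute",  # free-form publisher string
         "-quiet", "-J", "-r",
         "-V", "config-2",                   # volume label cloud-init probes for
         "/tmp/config_drive_staging"],       # staged openstack/... metadata tree
        check=True,
    )
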
Oct  9 10:01:12 compute-0 nova_compute[187439]: 2025-10-09 10:01:12.377 2 DEBUG nova.storage.rbd_utils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 4640d9c1-5670-4ad1-a4f3-488fb30df455_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:01:12 compute-0 nova_compute[187439]: 2025-10-09 10:01:12.382 2 DEBUG oslo_concurrency.processutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4640d9c1-5670-4ad1-a4f3-488fb30df455/disk.config 4640d9c1-5670-4ad1-a4f3-488fb30df455_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:01:12 compute-0 nova_compute[187439]: 2025-10-09 10:01:12.492 2 DEBUG oslo_concurrency.processutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4640d9c1-5670-4ad1-a4f3-488fb30df455/disk.config 4640d9c1-5670-4ad1-a4f3-488fb30df455_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:01:12 compute-0 nova_compute[187439]: 2025-10-09 10:01:12.493 2 INFO nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Deleting local config drive /var/lib/nova/instances/4640d9c1-5670-4ad1-a4f3-488fb30df455/disk.config because it was imported into RBD.#033[00m
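
With images_type=rbd backing, the freshly built ISO is immediately imported into the vms pool and the local copy deleted, so the config drive follows the instance like any other RBD-backed disk. A hypothetical check that the import landed:

    import json
    import subprocess

    out = subprocess.run(
        ["rbd", "info", "--pool", "vms",
         "4640d9c1-5670-4ad1-a4f3-488fb30df455_disk.config",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
         "--format", "json"],
        check=True, capture_output=True, text=True,
    )
    print(json.loads(out.stdout)["size"])  # image size in bytes
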
Oct  9 10:01:12 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct  9 10:01:12 compute-0 systemd[1]: Started libvirt secret daemon.
Oct  9 10:01:12 compute-0 kernel: tap24c642bf-d3: entered promiscuous mode
Oct  9 10:01:12 compute-0 NetworkManager[982]: <info>  [1760004072.5855] manager: (tap24c642bf-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Oct  9 10:01:12 compute-0 nova_compute[187439]: 2025-10-09 10:01:12.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:12 compute-0 ovn_controller[83056]: 2025-10-09T10:01:12Z|00050|binding|INFO|Claiming lport 24c642bf-d3e7-4003-97f5-0e43aca6db7b for this chassis.
Oct  9 10:01:12 compute-0 ovn_controller[83056]: 2025-10-09T10:01:12Z|00051|binding|INFO|24c642bf-d3e7-4003-97f5-0e43aca6db7b: Claiming fa:16:3e:d9:5b:8d 10.100.0.5
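
The claim happens because the Port_Binding row's requested-chassis names this host and an OVS interface carrying the matching iface-id just appeared on br-int. The binding state can be inspected from the Southbound DB; a hypothetical query (assumes ovn-sbctl can reach the SB database from this host):

    import subprocess

    # Look up the logical port's binding row; after the claim, "chassis"
    # points at compute-0 and, once "Setting lport ... up in Southbound"
    # is logged, up=[true].
    subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=24c642bf-d3e7-4003-97f5-0e43aca6db7b"],
        check=True,
    )
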
Oct  9 10:01:12 compute-0 nova_compute[187439]: 2025-10-09 10:01:12.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:12 compute-0 nova_compute[187439]: 2025-10-09 10:01:12.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:12 compute-0 nova_compute[187439]: 2025-10-09 10:01:12.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.606 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:5b:8d 10.100.0.5'], port_security=['fa:16:3e:d9:5b:8d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1238411040', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4640d9c1-5670-4ad1-a4f3-488fb30df455', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1238411040', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '938aac20-7e1a-43e3-b950-0829bdd160e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=887b951a-388d-4a48-aabf-54a7b01d9585, chassis=[<ovs.db.idl.Row object at 0x7f406a6797f0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f406a6797f0>], logical_port=24c642bf-d3e7-4003-97f5-0e43aca6db7b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.607 92053 INFO neutron.agent.ovn.metadata.agent [-] Port 24c642bf-d3e7-4003-97f5-0e43aca6db7b in datapath f1bd1d23-0de7-4b9c-b34f-27d8df0f3147 bound to our chassis#033[00m
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.608 92053 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f1bd1d23-0de7-4b9c-b34f-27d8df0f3147#033[00m
Oct  9 10:01:12 compute-0 systemd-machined[143379]: New machine qemu-3-instance-00000008.
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.621 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[53a26d23-d3ed-45fc-80a2-6ea530610e19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.622 92053 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf1bd1d23-01 in ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
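
Provisioning the datapath means giving the network its own ovnmeta- namespace with a veth pair: the -01 end lives inside the namespace and carries the metadata address, while the -00 end stays in the root namespace and is plugged into br-int (the AddPortCommand a few lines below). A rough, hypothetical approximation of that step with iproute2:

    import subprocess

    ns = "ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147"
    subprocess.run(["ip", "netns", "add", ns], check=True)
    subprocess.run(["ip", "link", "add", "tapf1bd1d23-00",
                    "type", "veth", "peer", "name", "tapf1bd1d23-01"],
                   check=True)
    subprocess.run(["ip", "link", "set", "tapf1bd1d23-01", "netns", ns],
                   check=True)
    # The agent then configures addresses inside the namespace and adds
    # tapf1bd1d23-00 to br-int with the right iface-id (see below).
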
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.624 192856 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf1bd1d23-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.624 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[2c48399c-0568-452d-8a37-1530edd6c363]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.625 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[fb2a4069-9c65-42c3-b187-d1fcee474ee1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:12 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000008.
Oct  9 10:01:12 compute-0 systemd-udevd[199059]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.643 92357 DEBUG oslo.privsep.daemon [-] privsep: reply[c61d5cd9-cfe5-40bb-8d1c-ba05f0ecd973]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:12 compute-0 NetworkManager[982]: <info>  [1760004072.6528] device (tap24c642bf-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 10:01:12 compute-0 NetworkManager[982]: <info>  [1760004072.6538] device (tap24c642bf-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.673 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[770ae664-f535-4088-84cb-8a696f14c5bb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:12 compute-0 ovn_controller[83056]: 2025-10-09T10:01:12Z|00052|binding|INFO|Setting lport 24c642bf-d3e7-4003-97f5-0e43aca6db7b ovn-installed in OVS
Oct  9 10:01:12 compute-0 ovn_controller[83056]: 2025-10-09T10:01:12Z|00053|binding|INFO|Setting lport 24c642bf-d3e7-4003-97f5-0e43aca6db7b up in Southbound
Oct  9 10:01:12 compute-0 nova_compute[187439]: 2025-10-09 10:01:12.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:12 compute-0 podman[198980]: 2025-10-09 10:01:12.700972431 +0000 UTC m=+0.170057268 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
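
The health_status=healthy record above comes from podman periodically executing the healthcheck baked into the ovn_controller container (the /openstack/healthcheck test visible in its config_data). The same probe can be triggered by hand; a minimal sketch:

    import subprocess

    # Exit status 0 means healthy, consistent with health_failing_streak=0
    # in the log record above.
    subprocess.run(["podman", "healthcheck", "run", "ovn_controller"],
                   check=True)
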
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.700 192891 DEBUG oslo.privsep.daemon [-] privsep: reply[6d986649-d752-4138-b29a-2af33bea912f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:12 compute-0 NetworkManager[982]: <info>  [1760004072.7084] manager: (tapf1bd1d23-00): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.707 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[ffae122e-a242-4a40-b83e-6329700af1b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:12 compute-0 systemd-udevd[199062]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.736 192891 DEBUG oslo.privsep.daemon [-] privsep: reply[b9450f6d-ab17-4979-ba5c-078d8d18b47c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.738 192891 DEBUG oslo.privsep.daemon [-] privsep: reply[02b3b58d-b9db-4f55-8ec5-721f2183f65a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:12 compute-0 NetworkManager[982]: <info>  [1760004072.7564] device (tapf1bd1d23-00): carrier: link connected
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.759 192891 DEBUG oslo.privsep.daemon [-] privsep: reply[8df07d1e-22cf-4ce3-bb76-273accb69d5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.775 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[8f992073-7eef-4a51-8b95-6cfe9a9767be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1bd1d23-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:76:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 175728, 'reachable_time': 21549, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 199082, 'error': None, 'target': 'ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.791 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[d18c1038-496a-44c7-9432-26f5302f980e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe14:762f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 175728, 'tstamp': 175728}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 199083, 'error': None, 'target': 'ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.806 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[21f18569-5148-41d3-ae0f-2d6507f57c38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1bd1d23-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:76:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 175728, 'reachable_time': 21549, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 199084, 'error': None, 'target': 'ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.840 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[9541a043-9174-45cd-b3b1-97a3be89660c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:12 compute-0 nova_compute[187439]: 2025-10-09 10:01:12.858 2 DEBUG nova.compute.manager [req-faefc2d2-ae32-4f89-8975-461e4f20f149 req-72c43077-7371-46a3-8acc-3892d1107d64 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Received event network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:01:12 compute-0 nova_compute[187439]: 2025-10-09 10:01:12.859 2 DEBUG oslo_concurrency.lockutils [req-faefc2d2-ae32-4f89-8975-461e4f20f149 req-72c43077-7371-46a3-8acc-3892d1107d64 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "4640d9c1-5670-4ad1-a4f3-488fb30df455-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:12 compute-0 nova_compute[187439]: 2025-10-09 10:01:12.859 2 DEBUG oslo_concurrency.lockutils [req-faefc2d2-ae32-4f89-8975-461e4f20f149 req-72c43077-7371-46a3-8acc-3892d1107d64 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "4640d9c1-5670-4ad1-a4f3-488fb30df455-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:12 compute-0 nova_compute[187439]: 2025-10-09 10:01:12.859 2 DEBUG oslo_concurrency.lockutils [req-faefc2d2-ae32-4f89-8975-461e4f20f149 req-72c43077-7371-46a3-8acc-3892d1107d64 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "4640d9c1-5670-4ad1-a4f3-488fb30df455-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:12 compute-0 nova_compute[187439]: 2025-10-09 10:01:12.859 2 DEBUG nova.compute.manager [req-faefc2d2-ae32-4f89-8975-461e4f20f149 req-72c43077-7371-46a3-8acc-3892d1107d64 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Processing event network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
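
The req-faefc2d2 lines are the other half of the handshake prepared at 10:01:11.739: once OVN reported the port up, Neutron called Nova's external-events API and the registered waiter is satisfied. A hedged sketch of that notification as Neutron makes it (endpoint and token are placeholders; the real call uses Neutron's service credentials):

    import requests

    NOVA_API = "http://nova-api.example.com/v2.1"  # placeholder endpoint
    payload = {"events": [{
        "name": "network-vif-plugged",
        "server_uuid": "4640d9c1-5670-4ad1-a4f3-488fb30df455",
        "tag": "24c642bf-d3e7-4003-97f5-0e43aca6db7b",  # the port UUID
        "status": "completed",
    }]}
    resp = requests.post(f"{NOVA_API}/os-server-external-events",
                         json=payload,
                         headers={"X-Auth-Token": "<service-token>"})
    resp.raise_for_status()
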
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.895 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[7dd360ca-bb0d-4783-8997-313d2a651ffc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.896 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1bd1d23-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.897 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.897 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1bd1d23-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:01:12 compute-0 nova_compute[187439]: 2025-10-09 10:01:12.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:12 compute-0 NetworkManager[982]: <info>  [1760004072.8997] manager: (tapf1bd1d23-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Oct  9 10:01:12 compute-0 kernel: tapf1bd1d23-00: entered promiscuous mode
Oct  9 10:01:12 compute-0 nova_compute[187439]: 2025-10-09 10:01:12.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.906 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf1bd1d23-00, col_values=(('external_ids', {'iface-id': '8eb8f8eb-7931-447c-950a-c32841e79526'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:01:12 compute-0 nova_compute[187439]: 2025-10-09 10:01:12.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:12 compute-0 ovn_controller[83056]: 2025-10-09T10:01:12Z|00054|binding|INFO|Releasing lport 8eb8f8eb-7931-447c-950a-c32841e79526 from this chassis (sb_readonly=0)
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.910 92053 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f1bd1d23-0de7-4b9c-b34f-27d8df0f3147.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f1bd1d23-0de7-4b9c-b34f-27d8df0f3147.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.911 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[5a963924-0488-4ea7-b5f5-53f8803212ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.912 92053 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: global
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]:    log         /dev/log local0 debug
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]:    log-tag     haproxy-metadata-proxy-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]:    user        root
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]:    group       root
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]:    maxconn     1024
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]:    pidfile     /var/lib/neutron/external/pids/f1bd1d23-0de7-4b9c-b34f-27d8df0f3147.pid.haproxy
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]:    daemon
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: defaults
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]:    log global
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]:    mode http
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]:    option httplog
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]:    option dontlognull
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]:    option http-server-close
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]:    option forwardfor
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]:    retries                 3
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]:    timeout http-request    30s
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]:    timeout connect         30s
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]:    timeout client          32s
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]:    timeout server          32s
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]:    timeout http-keep-alive 30s
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: listen listener
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]:    bind 169.254.169.254:80
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]:    server metadata /var/lib/neutron/metadata_proxy
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]:    http-request add-header X-OVN-Network-ID f1bd1d23-0de7-4b9c-b34f-27d8df0f3147
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  9 10:01:12 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:12.913 92053 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'env', 'PROCESS_TAG=haproxy-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f1bd1d23-0de7-4b9c-b34f-27d8df0f3147.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
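
The generated haproxy config binds the link-local metadata address inside the network namespace and forwards every request to the agent's UNIX socket (/var/lib/neutron/metadata_proxy), adding the X-OVN-Network-ID header so the agent can resolve which network the caller is on. A hypothetical smoke test from the hypervisor (expect an error response unless the source address maps to a known Neutron port):

    import subprocess

    ns = "ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147"
    subprocess.run(
        ["ip", "netns", "exec", ns, "curl", "-s",
         "http://169.254.169.254/openstack/latest/meta_data.json"],
        check=True,
    )
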
Oct  9 10:01:12 compute-0 nova_compute[187439]: 2025-10-09 10:01:12.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:13 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:13 compute-0 podman[199153]: 2025-10-09 10:01:13.268017219 +0000 UTC m=+0.044413180 container create f2e2c51f027c2199ef1b56b6010329f8060b2747523d61613da01fe6595c27af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  9 10:01:13 compute-0 systemd[1]: Started libpod-conmon-f2e2c51f027c2199ef1b56b6010329f8060b2747523d61613da01fe6595c27af.scope.
Oct  9 10:01:13 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:01:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec29531454ae92ab45c5fe4226848b1e13224c53991176600122c2e955ac8d4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  9 10:01:13 compute-0 podman[199153]: 2025-10-09 10:01:13.247006519 +0000 UTC m=+0.023402490 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 10:01:13 compute-0 podman[199153]: 2025-10-09 10:01:13.346007908 +0000 UTC m=+0.122403869 container init f2e2c51f027c2199ef1b56b6010329f8060b2747523d61613da01fe6595c27af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  9 10:01:13 compute-0 podman[199153]: 2025-10-09 10:01:13.351366611 +0000 UTC m=+0.127762562 container start f2e2c51f027c2199ef1b56b6010329f8060b2747523d61613da01fe6595c27af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  9 10:01:13 compute-0 neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147[199166]: [NOTICE]   (199171) : New worker (199173) forked
Oct  9 10:01:13 compute-0 neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147[199166]: [NOTICE]   (199171) : Loading success.
Oct  9 10:01:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:13.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.649 2 DEBUG nova.virt.driver [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Emitting event <LifecycleEvent: 1760004073.6489308, 4640d9c1-5670-4ad1-a4f3-488fb30df455 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.650 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] VM Started (Lifecycle Event)#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.652 2 DEBUG nova.compute.manager [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.655 2 DEBUG nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.659 2 INFO nova.virt.libvirt.driver [-] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Instance spawned successfully.#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.659 2 DEBUG nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.665 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.667 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.674 2 DEBUG nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.674 2 DEBUG nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.674 2 DEBUG nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.675 2 DEBUG nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.675 2 DEBUG nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.675 2 DEBUG nova.virt.libvirt.driver [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.682 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.682 2 DEBUG nova.virt.driver [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Emitting event <LifecycleEvent: 1760004073.6499405, 4640d9c1-5670-4ad1-a4f3-488fb30df455 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.682 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] VM Paused (Lifecycle Event)#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.697 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.699 2 DEBUG nova.virt.driver [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Emitting event <LifecycleEvent: 1760004073.6552074, 4640d9c1-5670-4ad1-a4f3-488fb30df455 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.699 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] VM Resumed (Lifecycle Event)#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.714 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.716 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.721 2 INFO nova.compute.manager [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Took 4.80 seconds to spawn the instance on the hypervisor.#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.721 2 DEBUG nova.compute.manager [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.727 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.766 2 INFO nova.compute.manager [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Took 5.46 seconds to build instance.#033[00m
Oct  9 10:01:13 compute-0 nova_compute[187439]: 2025-10-09 10:01:13.777 2 DEBUG oslo_concurrency.lockutils [None req-c65ab0c0-2e55-4635-937c-35b847bf74b8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "4640d9c1-5670-4ad1-a4f3-488fb30df455" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:14.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:14 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v844: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  9 10:01:14 compute-0 nova_compute[187439]: 2025-10-09 10:01:14.917 2 DEBUG nova.compute.manager [req-308bbc5e-3921-4c6b-ac95-ba8e1387eeaa req-0a971cc8-756a-4622-89ba-269d43393401 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Received event network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:01:14 compute-0 nova_compute[187439]: 2025-10-09 10:01:14.918 2 DEBUG oslo_concurrency.lockutils [req-308bbc5e-3921-4c6b-ac95-ba8e1387eeaa req-0a971cc8-756a-4622-89ba-269d43393401 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "4640d9c1-5670-4ad1-a4f3-488fb30df455-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:14 compute-0 nova_compute[187439]: 2025-10-09 10:01:14.918 2 DEBUG oslo_concurrency.lockutils [req-308bbc5e-3921-4c6b-ac95-ba8e1387eeaa req-0a971cc8-756a-4622-89ba-269d43393401 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "4640d9c1-5670-4ad1-a4f3-488fb30df455-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:14 compute-0 nova_compute[187439]: 2025-10-09 10:01:14.918 2 DEBUG oslo_concurrency.lockutils [req-308bbc5e-3921-4c6b-ac95-ba8e1387eeaa req-0a971cc8-756a-4622-89ba-269d43393401 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "4640d9c1-5670-4ad1-a4f3-488fb30df455-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:14 compute-0 nova_compute[187439]: 2025-10-09 10:01:14.918 2 DEBUG nova.compute.manager [req-308bbc5e-3921-4c6b-ac95-ba8e1387eeaa req-0a971cc8-756a-4622-89ba-269d43393401 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] No waiting events found dispatching network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:01:14 compute-0 nova_compute[187439]: 2025-10-09 10:01:14.919 2 WARNING nova.compute.manager [req-308bbc5e-3921-4c6b-ac95-ba8e1387eeaa req-0a971cc8-756a-4622-89ba-269d43393401 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Received unexpected event network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b for instance with vm_state active and task_state None.#033[00m
Oct  9 10:01:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:15.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:16.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:16 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v845: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct  9 10:01:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:16 compute-0 nova_compute[187439]: 2025-10-09 10:01:16.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:16 compute-0 nova_compute[187439]: 2025-10-09 10:01:16.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:17.077Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:17.090Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:17.090Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:17.090Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:17 compute-0 NetworkManager[982]: <info>  [1760004077.1138] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Oct  9 10:01:17 compute-0 NetworkManager[982]: <info>  [1760004077.1145] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Oct  9 10:01:17 compute-0 ovn_controller[83056]: 2025-10-09T10:01:17Z|00055|binding|INFO|Releasing lport 8eb8f8eb-7931-447c-950a-c32841e79526 from this chassis (sb_readonly=0)
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:17 compute-0 ovn_controller[83056]: 2025-10-09T10:01:17Z|00056|binding|INFO|Releasing lport 8eb8f8eb-7931-447c-950a-c32841e79526 from this chassis (sb_readonly=0)
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.335 2 DEBUG nova.compute.manager [req-d4e2dbe4-9a3d-4a1e-8a5c-2c1726745948 req-3e34b0c6-7659-4344-9226-89f265d546a2 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Received event network-changed-24c642bf-d3e7-4003-97f5-0e43aca6db7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.336 2 DEBUG nova.compute.manager [req-d4e2dbe4-9a3d-4a1e-8a5c-2c1726745948 req-3e34b0c6-7659-4344-9226-89f265d546a2 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Refreshing instance network info cache due to event network-changed-24c642bf-d3e7-4003-97f5-0e43aca6db7b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.337 2 DEBUG oslo_concurrency.lockutils [req-d4e2dbe4-9a3d-4a1e-8a5c-2c1726745948 req-3e34b0c6-7659-4344-9226-89f265d546a2 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-4640d9c1-5670-4ad1-a4f3-488fb30df455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.337 2 DEBUG oslo_concurrency.lockutils [req-d4e2dbe4-9a3d-4a1e-8a5c-2c1726745948 req-3e34b0c6-7659-4344-9226-89f265d546a2 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-4640d9c1-5670-4ad1-a4f3-488fb30df455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.337 2 DEBUG nova.network.neutron [req-d4e2dbe4-9a3d-4a1e-8a5c-2c1726745948 req-3e34b0c6-7659-4344-9226-89f265d546a2 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Refreshing network info cache for port 24c642bf-d3e7-4003-97f5-0e43aca6db7b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.478 2 DEBUG oslo_concurrency.lockutils [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "4640d9c1-5670-4ad1-a4f3-488fb30df455" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.478 2 DEBUG oslo_concurrency.lockutils [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "4640d9c1-5670-4ad1-a4f3-488fb30df455" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.479 2 DEBUG oslo_concurrency.lockutils [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "4640d9c1-5670-4ad1-a4f3-488fb30df455-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.479 2 DEBUG oslo_concurrency.lockutils [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "4640d9c1-5670-4ad1-a4f3-488fb30df455-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.479 2 DEBUG oslo_concurrency.lockutils [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "4640d9c1-5670-4ad1-a4f3-488fb30df455-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.480 2 INFO nova.compute.manager [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Terminating instance#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.481 2 DEBUG nova.compute.manager [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  9 10:01:17 compute-0 kernel: tap24c642bf-d3 (unregistering): left promiscuous mode
Oct  9 10:01:17 compute-0 NetworkManager[982]: <info>  [1760004077.5100] device (tap24c642bf-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 10:01:17 compute-0 ovn_controller[83056]: 2025-10-09T10:01:17Z|00057|binding|INFO|Releasing lport 24c642bf-d3e7-4003-97f5-0e43aca6db7b from this chassis (sb_readonly=0)
Oct  9 10:01:17 compute-0 ovn_controller[83056]: 2025-10-09T10:01:17Z|00058|binding|INFO|Setting lport 24c642bf-d3e7-4003-97f5-0e43aca6db7b down in Southbound
Oct  9 10:01:17 compute-0 ovn_controller[83056]: 2025-10-09T10:01:17Z|00059|binding|INFO|Removing iface tap24c642bf-d3 ovn-installed in OVS
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:17 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:17.537 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:5b:8d 10.100.0.5'], port_security=['fa:16:3e:d9:5b:8d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1238411040', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4640d9c1-5670-4ad1-a4f3-488fb30df455', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1238411040', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': '938aac20-7e1a-43e3-b950-0829bdd160e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=887b951a-388d-4a48-aabf-54a7b01d9585, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f406a6797f0>], logical_port=24c642bf-d3e7-4003-97f5-0e43aca6db7b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f406a6797f0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:01:17 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:17.539 92053 INFO neutron.agent.ovn.metadata.agent [-] Port 24c642bf-d3e7-4003-97f5-0e43aca6db7b in datapath f1bd1d23-0de7-4b9c-b34f-27d8df0f3147 unbound from our chassis#033[00m
Oct  9 10:01:17 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:17.539 92053 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f1bd1d23-0de7-4b9c-b34f-27d8df0f3147, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  9 10:01:17 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:17.541 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[ee427bff-29b4-463c-badd-8c9a45dbd8c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:17 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:17.541 92053 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147 namespace which is not needed anymore#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:17 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000008.scope: Deactivated successfully.
Oct  9 10:01:17 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000008.scope: Consumed 4.797s CPU time.
Oct  9 10:01:17 compute-0 systemd-machined[143379]: Machine qemu-3-instance-00000008 terminated.
Oct  9 10:01:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:17.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:17 compute-0 neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147[199166]: [NOTICE]   (199171) : haproxy version is 2.8.14-c23fe91
Oct  9 10:01:17 compute-0 neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147[199166]: [NOTICE]   (199171) : path to executable is /usr/sbin/haproxy
Oct  9 10:01:17 compute-0 neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147[199166]: [WARNING]  (199171) : Exiting Master process...
Oct  9 10:01:17 compute-0 neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147[199166]: [WARNING]  (199171) : Exiting Master process...
Oct  9 10:01:17 compute-0 neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147[199166]: [ALERT]    (199171) : Current worker (199173) exited with code 143 (Terminated)
Oct  9 10:01:17 compute-0 neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147[199166]: [WARNING]  (199171) : All workers exited. Exiting... (0)
Oct  9 10:01:17 compute-0 systemd[1]: libpod-f2e2c51f027c2199ef1b56b6010329f8060b2747523d61613da01fe6595c27af.scope: Deactivated successfully.
Oct  9 10:01:17 compute-0 podman[199203]: 2025-10-09 10:01:17.671533193 +0000 UTC m=+0.043559250 container died f2e2c51f027c2199ef1b56b6010329f8060b2747523d61613da01fe6595c27af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  9 10:01:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f2e2c51f027c2199ef1b56b6010329f8060b2747523d61613da01fe6595c27af-userdata-shm.mount: Deactivated successfully.
Oct  9 10:01:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-dec29531454ae92ab45c5fe4226848b1e13224c53991176600122c2e955ac8d4-merged.mount: Deactivated successfully.
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:17 compute-0 podman[199203]: 2025-10-09 10:01:17.703877058 +0000 UTC m=+0.075903116 container cleanup f2e2c51f027c2199ef1b56b6010329f8060b2747523d61613da01fe6595c27af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.716 2 INFO nova.virt.libvirt.driver [-] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Instance destroyed successfully.#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.717 2 DEBUG nova.objects.instance [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'resources' on Instance uuid 4640d9c1-5670-4ad1-a4f3-488fb30df455 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.729 2 DEBUG nova.virt.libvirt.vif [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T10:01:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1925701042',display_name='tempest-TestNetworkBasicOps-server-1925701042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1925701042',id=8,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP5qxPoCVJd5VnANzq6gXzu8Qg3VPhJTeiwPxTw4MegyVVNhe0MLS0a5xNScn1jiWodD1exagc6TYLbTjhulbxBE5a8G/SpWx3o0pPaddfHf09aIr3WlCbNx5ag3JmOgEg==',key_name='tempest-TestNetworkBasicOps-743261970',keypairs=<?>,launch_index=0,launched_at=2025-10-09T10:01:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-das5r866',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T10:01:13Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=4640d9c1-5670-4ad1-a4f3-488fb30df455,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.729 2 DEBUG nova.network.os_vif_util [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.730 2 DEBUG nova.network.os_vif_util [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5b:8d,bridge_name='br-int',has_traffic_filtering=True,id=24c642bf-d3e7-4003-97f5-0e43aca6db7b,network=Network(f1bd1d23-0de7-4b9c-b34f-27d8df0f3147),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap24c642bf-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.731 2 DEBUG os_vif [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5b:8d,bridge_name='br-int',has_traffic_filtering=True,id=24c642bf-d3e7-4003-97f5-0e43aca6db7b,network=Network(f1bd1d23-0de7-4b9c-b34f-27d8df0f3147),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap24c642bf-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:17 compute-0 systemd[1]: libpod-conmon-f2e2c51f027c2199ef1b56b6010329f8060b2747523d61613da01fe6595c27af.scope: Deactivated successfully.
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.734 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24c642bf-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.742 2 INFO os_vif [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5b:8d,bridge_name='br-int',has_traffic_filtering=True,id=24c642bf-d3e7-4003-97f5-0e43aca6db7b,network=Network(f1bd1d23-0de7-4b9c-b34f-27d8df0f3147),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap24c642bf-d3')#033[00m
Oct  9 10:01:17 compute-0 podman[199234]: 2025-10-09 10:01:17.778064618 +0000 UTC m=+0.039454973 container remove f2e2c51f027c2199ef1b56b6010329f8060b2747523d61613da01fe6595c27af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 10:01:17 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:17.784 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[e9afc770-3709-4966-9511-602aea12b4a8]: (4, ('Thu Oct  9 10:01:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147 (f2e2c51f027c2199ef1b56b6010329f8060b2747523d61613da01fe6595c27af)\nf2e2c51f027c2199ef1b56b6010329f8060b2747523d61613da01fe6595c27af\nThu Oct  9 10:01:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147 (f2e2c51f027c2199ef1b56b6010329f8060b2747523d61613da01fe6595c27af)\nf2e2c51f027c2199ef1b56b6010329f8060b2747523d61613da01fe6595c27af\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:17 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:17.786 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[d598f6fd-d3f7-40aa-8c86-fe3bee2164ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:17 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:17.787 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1bd1d23-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:17 compute-0 kernel: tapf1bd1d23-00: left promiscuous mode
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:17 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:17.812 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[1fd4d7e6-a112-442b-95aa-499ce84050d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:17 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:17.833 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[1eeb76e1-a1d4-46d4-9b79-8b9f4249e6c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:17 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:17.835 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[0e1c5a10-86ee-488e-9b5f-44504aadb41b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:17 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:17.855 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[10c62174-079f-477b-9fca-290619093fe8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 175722, 'reachable_time': 20524, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 199265, 'error': None, 'target': 'ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:17 compute-0 systemd[1]: run-netns-ovnmeta\x2df1bd1d23\x2d0de7\x2d4b9c\x2db34f\x2d27d8df0f3147.mount: Deactivated successfully.
Oct  9 10:01:17 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:17.859 92357 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  9 10:01:17 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:17.859 92357 DEBUG oslo.privsep.daemon [-] privsep: reply[12a5bead-338f-4694-b827-e73b8fe33d00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.937 2 INFO nova.virt.libvirt.driver [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Deleting instance files /var/lib/nova/instances/4640d9c1-5670-4ad1-a4f3-488fb30df455_del#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.938 2 INFO nova.virt.libvirt.driver [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Deletion of /var/lib/nova/instances/4640d9c1-5670-4ad1-a4f3-488fb30df455_del complete#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.975 2 INFO nova.compute.manager [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Took 0.49 seconds to destroy the instance on the hypervisor.#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.975 2 DEBUG oslo.service.loopingcall [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.975 2 DEBUG nova.compute.manager [-] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  9 10:01:17 compute-0 nova_compute[187439]: 2025-10-09 10:01:17.975 2 DEBUG nova.network.neutron [-] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  9 10:01:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:17 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:17 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:17 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:18 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:18.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:18 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v846: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct  9 10:01:18 compute-0 nova_compute[187439]: 2025-10-09 10:01:18.257 2 DEBUG nova.network.neutron [req-d4e2dbe4-9a3d-4a1e-8a5c-2c1726745948 req-3e34b0c6-7659-4344-9226-89f265d546a2 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Updated VIF entry in instance network info cache for port 24c642bf-d3e7-4003-97f5-0e43aca6db7b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  9 10:01:18 compute-0 nova_compute[187439]: 2025-10-09 10:01:18.257 2 DEBUG nova.network.neutron [req-d4e2dbe4-9a3d-4a1e-8a5c-2c1726745948 req-3e34b0c6-7659-4344-9226-89f265d546a2 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Updating instance_info_cache with network_info: [{"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  9 10:01:18 compute-0 nova_compute[187439]: 2025-10-09 10:01:18.271 2 DEBUG oslo_concurrency.lockutils [req-d4e2dbe4-9a3d-4a1e-8a5c-2c1726745948 req-3e34b0c6-7659-4344-9226-89f265d546a2 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-4640d9c1-5670-4ad1-a4f3-488fb30df455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  9 10:01:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:18.910Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:18.918Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:18.919Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:18.919Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.179 2 DEBUG nova.network.neutron [-] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.189 2 INFO nova.compute.manager [-] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Took 1.21 seconds to deallocate network for instance.
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.217 2 DEBUG oslo_concurrency.lockutils [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.218 2 DEBUG oslo_concurrency.lockutils [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.255 2 DEBUG oslo_concurrency.processutils [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.388 2 DEBUG nova.compute.manager [req-0fd1a5a5-1848-4a69-9bbb-c724bc9e5d0d req-13c2e7ba-5f87-437c-89d0-d4fd35c504bc b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Received event network-vif-unplugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.389 2 DEBUG oslo_concurrency.lockutils [req-0fd1a5a5-1848-4a69-9bbb-c724bc9e5d0d req-13c2e7ba-5f87-437c-89d0-d4fd35c504bc b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "4640d9c1-5670-4ad1-a4f3-488fb30df455-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.389 2 DEBUG oslo_concurrency.lockutils [req-0fd1a5a5-1848-4a69-9bbb-c724bc9e5d0d req-13c2e7ba-5f87-437c-89d0-d4fd35c504bc b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "4640d9c1-5670-4ad1-a4f3-488fb30df455-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.389 2 DEBUG oslo_concurrency.lockutils [req-0fd1a5a5-1848-4a69-9bbb-c724bc9e5d0d req-13c2e7ba-5f87-437c-89d0-d4fd35c504bc b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "4640d9c1-5670-4ad1-a4f3-488fb30df455-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.390 2 DEBUG nova.compute.manager [req-0fd1a5a5-1848-4a69-9bbb-c724bc9e5d0d req-13c2e7ba-5f87-437c-89d0-d4fd35c504bc b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] No waiting events found dispatching network-vif-unplugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.390 2 WARNING nova.compute.manager [req-0fd1a5a5-1848-4a69-9bbb-c724bc9e5d0d req-13c2e7ba-5f87-437c-89d0-d4fd35c504bc b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Received unexpected event network-vif-unplugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b for instance with vm_state deleted and task_state None.
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.390 2 DEBUG nova.compute.manager [req-0fd1a5a5-1848-4a69-9bbb-c724bc9e5d0d req-13c2e7ba-5f87-437c-89d0-d4fd35c504bc b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Received event network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.390 2 DEBUG oslo_concurrency.lockutils [req-0fd1a5a5-1848-4a69-9bbb-c724bc9e5d0d req-13c2e7ba-5f87-437c-89d0-d4fd35c504bc b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "4640d9c1-5670-4ad1-a4f3-488fb30df455-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.390 2 DEBUG oslo_concurrency.lockutils [req-0fd1a5a5-1848-4a69-9bbb-c724bc9e5d0d req-13c2e7ba-5f87-437c-89d0-d4fd35c504bc b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "4640d9c1-5670-4ad1-a4f3-488fb30df455-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.390 2 DEBUG oslo_concurrency.lockutils [req-0fd1a5a5-1848-4a69-9bbb-c724bc9e5d0d req-13c2e7ba-5f87-437c-89d0-d4fd35c504bc b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "4640d9c1-5670-4ad1-a4f3-488fb30df455-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.391 2 DEBUG nova.compute.manager [req-0fd1a5a5-1848-4a69-9bbb-c724bc9e5d0d req-13c2e7ba-5f87-437c-89d0-d4fd35c504bc b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] No waiting events found dispatching network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.391 2 WARNING nova.compute.manager [req-0fd1a5a5-1848-4a69-9bbb-c724bc9e5d0d req-13c2e7ba-5f87-437c-89d0-d4fd35c504bc b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Received unexpected event network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b for instance with vm_state deleted and task_state None.
Oct  9 10:01:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:01:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:01:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:01:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2235098991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:01:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:01:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:01:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:19.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.643 2 DEBUG oslo_concurrency.processutils [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.648 2 DEBUG nova.compute.provider_tree [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  9 10:01:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:01:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.661 2 DEBUG nova.scheduler.client.report [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.673 2 DEBUG oslo_concurrency.lockutils [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.455s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.692 2 INFO nova.scheduler.client.report [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Deleted allocations for instance 4640d9c1-5670-4ad1-a4f3-488fb30df455
Oct  9 10:01:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:01:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:01:19 compute-0 nova_compute[187439]: 2025-10-09 10:01:19.735 2 DEBUG oslo_concurrency.lockutils [None req-0d086ce1-5b54-416f-a73b-81d0f06be530 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "4640d9c1-5670-4ad1-a4f3-488fb30df455" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 10:01:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:20.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:20 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v847: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct  9 10:01:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:21.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:21 compute-0 nova_compute[187439]: 2025-10-09 10:01:21.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:01:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:22.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:22 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v848: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct  9 10:01:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:01:22] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Oct  9 10:01:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:01:22] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Oct  9 10:01:22 compute-0 nova_compute[187439]: 2025-10-09 10:01:22.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:01:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:23 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:23 compute-0 podman[199296]: 2025-10-09 10:01:23.601656806 +0000 UTC m=+0.039558187 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct  9 10:01:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:23.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:24.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:24 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v849: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct  9 10:01:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:25.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:26.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:26 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v850: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct  9 10:01:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:01:26 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:01:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 10:01:26 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:01:26 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v851: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.3 KiB/s wr, 31 op/s
Oct  9 10:01:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 10:01:26 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:01:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 10:01:26 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:01:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 10:01:26 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 10:01:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 10:01:26 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:01:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:01:26 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:01:26 compute-0 nova_compute[187439]: 2025-10-09 10:01:26.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:01:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:27.078Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:27.094Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:27.094Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:27.095Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:27 compute-0 podman[199476]: 2025-10-09 10:01:27.102179031 +0000 UTC m=+0.037126474 container create c7837201c71177902f40bdd38243b69cb135b0b4bd9e0eb54a8c2ad0a8cc7fd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_tesla, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:01:27 compute-0 systemd[1]: Started libpod-conmon-c7837201c71177902f40bdd38243b69cb135b0b4bd9e0eb54a8c2ad0a8cc7fd8.scope.
Oct  9 10:01:27 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:01:27 compute-0 podman[199476]: 2025-10-09 10:01:27.167476592 +0000 UTC m=+0.102424054 container init c7837201c71177902f40bdd38243b69cb135b0b4bd9e0eb54a8c2ad0a8cc7fd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:01:27 compute-0 podman[199476]: 2025-10-09 10:01:27.173413525 +0000 UTC m=+0.108360957 container start c7837201c71177902f40bdd38243b69cb135b0b4bd9e0eb54a8c2ad0a8cc7fd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  9 10:01:27 compute-0 podman[199476]: 2025-10-09 10:01:27.174675984 +0000 UTC m=+0.109623426 container attach c7837201c71177902f40bdd38243b69cb135b0b4bd9e0eb54a8c2ad0a8cc7fd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_tesla, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 10:01:27 compute-0 systemd[1]: libpod-c7837201c71177902f40bdd38243b69cb135b0b4bd9e0eb54a8c2ad0a8cc7fd8.scope: Deactivated successfully.
Oct  9 10:01:27 compute-0 focused_tesla[199489]: 167 167
Oct  9 10:01:27 compute-0 conmon[199489]: conmon c7837201c71177902f40 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c7837201c71177902f40bdd38243b69cb135b0b4bd9e0eb54a8c2ad0a8cc7fd8.scope/container/memory.events
Oct  9 10:01:27 compute-0 podman[199476]: 2025-10-09 10:01:27.180186292 +0000 UTC m=+0.115133995 container died c7837201c71177902f40bdd38243b69cb135b0b4bd9e0eb54a8c2ad0a8cc7fd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:01:27 compute-0 podman[199476]: 2025-10-09 10:01:27.086935213 +0000 UTC m=+0.021882675 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:01:27 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:01:27 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:01:27 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:01:27 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:01:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-426c9f777c390b4e8bee8a464a8f1c2985cde1d085e13ee0770f7255c7dea852-merged.mount: Deactivated successfully.
Oct  9 10:01:27 compute-0 podman[199476]: 2025-10-09 10:01:27.203638876 +0000 UTC m=+0.138586318 container remove c7837201c71177902f40bdd38243b69cb135b0b4bd9e0eb54a8c2ad0a8cc7fd8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_tesla, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  9 10:01:27 compute-0 systemd[1]: libpod-conmon-c7837201c71177902f40bdd38243b69cb135b0b4bd9e0eb54a8c2ad0a8cc7fd8.scope: Deactivated successfully.
Oct  9 10:01:27 compute-0 podman[199510]: 2025-10-09 10:01:27.34285523 +0000 UTC m=+0.040591173 container create 60072cfd76f18a91601c9e117c72ab2ec88b8e9809a0ec100d5046e6c3cecb74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:01:27 compute-0 systemd[1]: Started libpod-conmon-60072cfd76f18a91601c9e117c72ab2ec88b8e9809a0ec100d5046e6c3cecb74.scope.
Oct  9 10:01:27 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:01:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b07cc1ea1786c6fbf3981af35c5cd458ff7d915bf94458dda626551f2500bda2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:01:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b07cc1ea1786c6fbf3981af35c5cd458ff7d915bf94458dda626551f2500bda2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:01:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b07cc1ea1786c6fbf3981af35c5cd458ff7d915bf94458dda626551f2500bda2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:01:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b07cc1ea1786c6fbf3981af35c5cd458ff7d915bf94458dda626551f2500bda2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:01:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b07cc1ea1786c6fbf3981af35c5cd458ff7d915bf94458dda626551f2500bda2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:01:27 compute-0 podman[199510]: 2025-10-09 10:01:27.410232322 +0000 UTC m=+0.107968275 container init 60072cfd76f18a91601c9e117c72ab2ec88b8e9809a0ec100d5046e6c3cecb74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_liskov, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:01:27 compute-0 podman[199510]: 2025-10-09 10:01:27.325120006 +0000 UTC m=+0.022855959 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:01:27 compute-0 podman[199510]: 2025-10-09 10:01:27.425191093 +0000 UTC m=+0.122927026 container start 60072cfd76f18a91601c9e117c72ab2ec88b8e9809a0ec100d5046e6c3cecb74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_liskov, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  9 10:01:27 compute-0 podman[199510]: 2025-10-09 10:01:27.426586283 +0000 UTC m=+0.124322216 container attach 60072cfd76f18a91601c9e117c72ab2ec88b8e9809a0ec100d5046e6c3cecb74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_liskov, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct  9 10:01:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:27.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:27 compute-0 relaxed_liskov[199523]: --> passed data devices: 0 physical, 1 LVM
Oct  9 10:01:27 compute-0 relaxed_liskov[199523]: --> All data devices are unavailable
Oct  9 10:01:27 compute-0 systemd[1]: libpod-60072cfd76f18a91601c9e117c72ab2ec88b8e9809a0ec100d5046e6c3cecb74.scope: Deactivated successfully.
Oct  9 10:01:27 compute-0 podman[199510]: 2025-10-09 10:01:27.713379453 +0000 UTC m=+0.411115386 container died 60072cfd76f18a91601c9e117c72ab2ec88b8e9809a0ec100d5046e6c3cecb74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:01:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b07cc1ea1786c6fbf3981af35c5cd458ff7d915bf94458dda626551f2500bda2-merged.mount: Deactivated successfully.
Oct  9 10:01:27 compute-0 nova_compute[187439]: 2025-10-09 10:01:27.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:01:27 compute-0 podman[199510]: 2025-10-09 10:01:27.745101084 +0000 UTC m=+0.442837018 container remove 60072cfd76f18a91601c9e117c72ab2ec88b8e9809a0ec100d5046e6c3cecb74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:01:27 compute-0 systemd[1]: libpod-conmon-60072cfd76f18a91601c9e117c72ab2ec88b8e9809a0ec100d5046e6c3cecb74.scope: Deactivated successfully.
Oct  9 10:01:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:28 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:28 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:28 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:28.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:28 compute-0 podman[199631]: 2025-10-09 10:01:28.272457502 +0000 UTC m=+0.040101592 container create f536d204b32dddbde3d51bf73f4ed3f07818708d6daae070ae63a7df776b3c73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_shtern, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:01:28 compute-0 systemd[1]: Started libpod-conmon-f536d204b32dddbde3d51bf73f4ed3f07818708d6daae070ae63a7df776b3c73.scope.
Oct  9 10:01:28 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:01:28 compute-0 podman[199631]: 2025-10-09 10:01:28.343981279 +0000 UTC m=+0.111625369 container init f536d204b32dddbde3d51bf73f4ed3f07818708d6daae070ae63a7df776b3c73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_shtern, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Oct  9 10:01:28 compute-0 podman[199631]: 2025-10-09 10:01:28.349213483 +0000 UTC m=+0.116857574 container start f536d204b32dddbde3d51bf73f4ed3f07818708d6daae070ae63a7df776b3c73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_shtern, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct  9 10:01:28 compute-0 podman[199631]: 2025-10-09 10:01:28.350426379 +0000 UTC m=+0.118070470 container attach f536d204b32dddbde3d51bf73f4ed3f07818708d6daae070ae63a7df776b3c73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_shtern, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 10:01:28 compute-0 sharp_shtern[199644]: 167 167
Oct  9 10:01:28 compute-0 podman[199631]: 2025-10-09 10:01:28.258957972 +0000 UTC m=+0.026602082 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:01:28 compute-0 systemd[1]: libpod-f536d204b32dddbde3d51bf73f4ed3f07818708d6daae070ae63a7df776b3c73.scope: Deactivated successfully.
Oct  9 10:01:28 compute-0 podman[199631]: 2025-10-09 10:01:28.353648765 +0000 UTC m=+0.121292855 container died f536d204b32dddbde3d51bf73f4ed3f07818708d6daae070ae63a7df776b3c73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:01:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-81a7394f004f1d406e88f7e77427fcf0e2850e0e89201278119b88b702c736bc-merged.mount: Deactivated successfully.
Oct  9 10:01:28 compute-0 podman[199631]: 2025-10-09 10:01:28.377187251 +0000 UTC m=+0.144831340 container remove f536d204b32dddbde3d51bf73f4ed3f07818708d6daae070ae63a7df776b3c73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  9 10:01:28 compute-0 systemd[1]: libpod-conmon-f536d204b32dddbde3d51bf73f4ed3f07818708d6daae070ae63a7df776b3c73.scope: Deactivated successfully.
Oct  9 10:01:28 compute-0 podman[199667]: 2025-10-09 10:01:28.522555102 +0000 UTC m=+0.035937351 container create 2a9da6c5b9056ffbcdb2835ab702b8a30763b856edd858525098272adbbedf17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  9 10:01:28 compute-0 systemd[1]: Started libpod-conmon-2a9da6c5b9056ffbcdb2835ab702b8a30763b856edd858525098272adbbedf17.scope.
Oct  9 10:01:28 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:01:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5635d31ee3681dc9761aa1192f2f3f951627a06e0bb3fc8860bdbc09d0be05a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:01:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5635d31ee3681dc9761aa1192f2f3f951627a06e0bb3fc8860bdbc09d0be05a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:01:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5635d31ee3681dc9761aa1192f2f3f951627a06e0bb3fc8860bdbc09d0be05a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:01:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5635d31ee3681dc9761aa1192f2f3f951627a06e0bb3fc8860bdbc09d0be05a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:01:28 compute-0 podman[199667]: 2025-10-09 10:01:28.590780652 +0000 UTC m=+0.104162912 container init 2a9da6c5b9056ffbcdb2835ab702b8a30763b856edd858525098272adbbedf17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hopper, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  9 10:01:28 compute-0 podman[199667]: 2025-10-09 10:01:28.597206317 +0000 UTC m=+0.110588566 container start 2a9da6c5b9056ffbcdb2835ab702b8a30763b856edd858525098272adbbedf17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hopper, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:01:28 compute-0 podman[199667]: 2025-10-09 10:01:28.598378136 +0000 UTC m=+0.111760386 container attach 2a9da6c5b9056ffbcdb2835ab702b8a30763b856edd858525098272adbbedf17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hopper, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  9 10:01:28 compute-0 podman[199667]: 2025-10-09 10:01:28.509920252 +0000 UTC m=+0.023302502 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:01:28 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v852: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.3 KiB/s wr, 31 op/s
Oct  9 10:01:28 compute-0 funny_hopper[199681]: {
Oct  9 10:01:28 compute-0 funny_hopper[199681]:    "1": [
Oct  9 10:01:28 compute-0 funny_hopper[199681]:        {
Oct  9 10:01:28 compute-0 funny_hopper[199681]:            "devices": [
Oct  9 10:01:28 compute-0 funny_hopper[199681]:                "/dev/loop3"
Oct  9 10:01:28 compute-0 funny_hopper[199681]:            ],
Oct  9 10:01:28 compute-0 funny_hopper[199681]:            "lv_name": "ceph_lv0",
Oct  9 10:01:28 compute-0 funny_hopper[199681]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:01:28 compute-0 funny_hopper[199681]:            "lv_size": "21470642176",
Oct  9 10:01:28 compute-0 funny_hopper[199681]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 10:01:28 compute-0 funny_hopper[199681]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 10:01:28 compute-0 funny_hopper[199681]:            "name": "ceph_lv0",
Oct  9 10:01:28 compute-0 funny_hopper[199681]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:01:28 compute-0 funny_hopper[199681]:            "tags": {
Oct  9 10:01:28 compute-0 funny_hopper[199681]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:01:28 compute-0 funny_hopper[199681]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 10:01:28 compute-0 funny_hopper[199681]:                "ceph.cephx_lockbox_secret": "",
Oct  9 10:01:28 compute-0 funny_hopper[199681]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 10:01:28 compute-0 funny_hopper[199681]:                "ceph.cluster_name": "ceph",
Oct  9 10:01:28 compute-0 funny_hopper[199681]:                "ceph.crush_device_class": "",
Oct  9 10:01:28 compute-0 funny_hopper[199681]:                "ceph.encrypted": "0",
Oct  9 10:01:28 compute-0 funny_hopper[199681]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 10:01:28 compute-0 funny_hopper[199681]:                "ceph.osd_id": "1",
Oct  9 10:01:28 compute-0 funny_hopper[199681]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 10:01:28 compute-0 funny_hopper[199681]:                "ceph.type": "block",
Oct  9 10:01:28 compute-0 funny_hopper[199681]:                "ceph.vdo": "0",
Oct  9 10:01:28 compute-0 funny_hopper[199681]:                "ceph.with_tpm": "0"
Oct  9 10:01:28 compute-0 funny_hopper[199681]:            },
Oct  9 10:01:28 compute-0 funny_hopper[199681]:            "type": "block",
Oct  9 10:01:28 compute-0 funny_hopper[199681]:            "vg_name": "ceph_vg0"
Oct  9 10:01:28 compute-0 funny_hopper[199681]:        }
Oct  9 10:01:28 compute-0 funny_hopper[199681]:    ]
Oct  9 10:01:28 compute-0 funny_hopper[199681]: }
Oct  9 10:01:28 compute-0 systemd[1]: libpod-2a9da6c5b9056ffbcdb2835ab702b8a30763b856edd858525098272adbbedf17.scope: Deactivated successfully.
Oct  9 10:01:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:28.911Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:28.923Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:28.923Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:28.923Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:28 compute-0 podman[199691]: 2025-10-09 10:01:28.943482224 +0000 UTC m=+0.036100227 container died 2a9da6c5b9056ffbcdb2835ab702b8a30763b856edd858525098272adbbedf17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hopper, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:01:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-5635d31ee3681dc9761aa1192f2f3f951627a06e0bb3fc8860bdbc09d0be05a0-merged.mount: Deactivated successfully.
Oct  9 10:01:28 compute-0 podman[199691]: 2025-10-09 10:01:28.969016583 +0000 UTC m=+0.061634575 container remove 2a9da6c5b9056ffbcdb2835ab702b8a30763b856edd858525098272adbbedf17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hopper, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct  9 10:01:28 compute-0 systemd[1]: libpod-conmon-2a9da6c5b9056ffbcdb2835ab702b8a30763b856edd858525098272adbbedf17.scope: Deactivated successfully.
Oct  9 10:01:28 compute-0 podman[199690]: 2025-10-09 10:01:28.998784852 +0000 UTC m=+0.079453698 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Oct  9 10:01:29 compute-0 podman[199800]: 2025-10-09 10:01:29.481638441 +0000 UTC m=+0.034821117 container create cfdce566071b272d510b87cedfec24742d139f7070b7e0bc309328543d67fe12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_wilbur, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Oct  9 10:01:29 compute-0 systemd[1]: Started libpod-conmon-cfdce566071b272d510b87cedfec24742d139f7070b7e0bc309328543d67fe12.scope.
Oct  9 10:01:29 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:01:29 compute-0 podman[199800]: 2025-10-09 10:01:29.542290774 +0000 UTC m=+0.095473450 container init cfdce566071b272d510b87cedfec24742d139f7070b7e0bc309328543d67fe12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:01:29 compute-0 podman[199800]: 2025-10-09 10:01:29.548508876 +0000 UTC m=+0.101691553 container start cfdce566071b272d510b87cedfec24742d139f7070b7e0bc309328543d67fe12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  9 10:01:29 compute-0 podman[199800]: 2025-10-09 10:01:29.549856438 +0000 UTC m=+0.103039113 container attach cfdce566071b272d510b87cedfec24742d139f7070b7e0bc309328543d67fe12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  9 10:01:29 compute-0 upbeat_wilbur[199813]: 167 167
Oct  9 10:01:29 compute-0 systemd[1]: libpod-cfdce566071b272d510b87cedfec24742d139f7070b7e0bc309328543d67fe12.scope: Deactivated successfully.
Oct  9 10:01:29 compute-0 podman[199800]: 2025-10-09 10:01:29.553437909 +0000 UTC m=+0.106620595 container died cfdce566071b272d510b87cedfec24742d139f7070b7e0bc309328543d67fe12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_wilbur, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  9 10:01:29 compute-0 podman[199800]: 2025-10-09 10:01:29.468213992 +0000 UTC m=+0.021396668 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:01:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-1024705ff276e5dd2a1ea7d3064c1ceca073b68fc4f359335d25df57155541a3-merged.mount: Deactivated successfully.
Oct  9 10:01:29 compute-0 podman[199800]: 2025-10-09 10:01:29.57071476 +0000 UTC m=+0.123897446 container remove cfdce566071b272d510b87cedfec24742d139f7070b7e0bc309328543d67fe12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_wilbur, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 10:01:29 compute-0 systemd[1]: libpod-conmon-cfdce566071b272d510b87cedfec24742d139f7070b7e0bc309328543d67fe12.scope: Deactivated successfully.
Oct  9 10:01:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:29.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:29 compute-0 podman[199834]: 2025-10-09 10:01:29.707537722 +0000 UTC m=+0.032411795 container create 9471ae50dbd1993b2cee688738afe36fc559c149ca4c12cfb4bfb1285ea8aa89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_franklin, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:01:29 compute-0 systemd[1]: Started libpod-conmon-9471ae50dbd1993b2cee688738afe36fc559c149ca4c12cfb4bfb1285ea8aa89.scope.
Oct  9 10:01:29 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac085ceba6ea6f2f8d98715a67a5bd1be4b5a9728edae968478221b26650723f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac085ceba6ea6f2f8d98715a67a5bd1be4b5a9728edae968478221b26650723f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac085ceba6ea6f2f8d98715a67a5bd1be4b5a9728edae968478221b26650723f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac085ceba6ea6f2f8d98715a67a5bd1be4b5a9728edae968478221b26650723f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:01:29 compute-0 podman[199834]: 2025-10-09 10:01:29.758882579 +0000 UTC m=+0.083756651 container init 9471ae50dbd1993b2cee688738afe36fc559c149ca4c12cfb4bfb1285ea8aa89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_franklin, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 10:01:29 compute-0 podman[199834]: 2025-10-09 10:01:29.766512272 +0000 UTC m=+0.091386345 container start 9471ae50dbd1993b2cee688738afe36fc559c149ca4c12cfb4bfb1285ea8aa89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:01:29 compute-0 podman[199834]: 2025-10-09 10:01:29.767644227 +0000 UTC m=+0.092518300 container attach 9471ae50dbd1993b2cee688738afe36fc559c149ca4c12cfb4bfb1285ea8aa89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_franklin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:01:29 compute-0 podman[199834]: 2025-10-09 10:01:29.694687737 +0000 UTC m=+0.019561830 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:01:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:30.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:30 compute-0 elegant_franklin[199847]: {}
Oct  9 10:01:30 compute-0 lvm[199925]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:01:30 compute-0 lvm[199925]: VG ceph_vg0 finished
Oct  9 10:01:30 compute-0 systemd[1]: libpod-9471ae50dbd1993b2cee688738afe36fc559c149ca4c12cfb4bfb1285ea8aa89.scope: Deactivated successfully.
Oct  9 10:01:30 compute-0 podman[199834]: 2025-10-09 10:01:30.332822444 +0000 UTC m=+0.657696518 container died 9471ae50dbd1993b2cee688738afe36fc559c149ca4c12cfb4bfb1285ea8aa89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct  9 10:01:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac085ceba6ea6f2f8d98715a67a5bd1be4b5a9728edae968478221b26650723f-merged.mount: Deactivated successfully.
Oct  9 10:01:30 compute-0 lvm[199929]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:01:30 compute-0 lvm[199929]: VG ceph_vg0 finished
Oct  9 10:01:30 compute-0 podman[199834]: 2025-10-09 10:01:30.362539308 +0000 UTC m=+0.687413381 container remove 9471ae50dbd1993b2cee688738afe36fc559c149ca4c12cfb4bfb1285ea8aa89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_franklin, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  9 10:01:30 compute-0 systemd[1]: libpod-conmon-9471ae50dbd1993b2cee688738afe36fc559c149ca4c12cfb4bfb1285ea8aa89.scope: Deactivated successfully.
Oct  9 10:01:30 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:01:30 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:01:30 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:01:30 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:01:30 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v853: 337 pgs: 337 active+clean; 55 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 778 KiB/s wr, 44 op/s
Oct  9 10:01:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:31 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:01:31 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:01:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:31.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:31 compute-0 nova_compute[187439]: 2025-10-09 10:01:31.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:32.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:01:32] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Oct  9 10:01:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:01:32] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Oct  9 10:01:32 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v854: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Oct  9 10:01:32 compute-0 nova_compute[187439]: 2025-10-09 10:01:32.712 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760004077.7118843, 4640d9c1-5670-4ad1-a4f3-488fb30df455 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:01:32 compute-0 nova_compute[187439]: 2025-10-09 10:01:32.713 2 INFO nova.compute.manager [-] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] VM Stopped (Lifecycle Event)#033[00m
Oct  9 10:01:32 compute-0 nova_compute[187439]: 2025-10-09 10:01:32.727 2 DEBUG nova.compute.manager [None req-01f19d0b-741f-4e4b-9054-66d867f82145 - - - - - -] [instance: 4640d9c1-5670-4ad1-a4f3-488fb30df455] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:01:32 compute-0 nova_compute[187439]: 2025-10-09 10:01:32.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:33.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:33 compute-0 podman[199991]: 2025-10-09 10:01:33.640410977 +0000 UTC m=+0.075760126 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  9 10:01:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:34.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:01:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:01:34 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v855: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Oct  9 10:01:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:35.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:36.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:36 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v856: 337 pgs: 337 active+clean; 67 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 126 op/s
Oct  9 10:01:36 compute-0 nova_compute[187439]: 2025-10-09 10:01:36.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:37.079Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:37.093Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:37.094Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:37.094Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:37.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:37 compute-0 nova_compute[187439]: 2025-10-09 10:01:37.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:38 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:38 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:38 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:38.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:38 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v857: 337 pgs: 337 active+clean; 67 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct  9 10:01:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:38.913Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:38.922Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:38.922Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:38.923Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:39.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:40.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:40 compute-0 nova_compute[187439]: 2025-10-09 10:01:40.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:40 compute-0 nova_compute[187439]: 2025-10-09 10:01:40.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:40 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v858: 337 pgs: 337 active+clean; 53 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Oct  9 10:01:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:41.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:41 compute-0 nova_compute[187439]: 2025-10-09 10:01:41.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:42.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:01:42] "GET /metrics HTTP/1.1" 200 48553 "" "Prometheus/2.51.0"
Oct  9 10:01:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:01:42] "GET /metrics HTTP/1.1" 200 48553 "" "Prometheus/2.51.0"
Oct  9 10:01:42 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v859: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 116 op/s
Oct  9 10:01:42 compute-0 nova_compute[187439]: 2025-10-09 10:01:42.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:43 compute-0 podman[200019]: 2025-10-09 10:01:43.6237191 +0000 UTC m=+0.063593460 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct  9 10:01:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:43.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:44.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:44 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v860: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct  9 10:01:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:45.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:46.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:46 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v861: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct  9 10:01:46 compute-0 nova_compute[187439]: 2025-10-09 10:01:46.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:47.080Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:47.088Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:47.088Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:47.089Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:47.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:47 compute-0 nova_compute[187439]: 2025-10-09 10:01:47.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:01:47 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:47.877 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  9 10:01:47 compute-0 nova_compute[187439]: 2025-10-09 10:01:47.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:01:47 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:47.878 92053 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  9 10:01:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:48.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:48 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v862: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 938 B/s wr, 17 op/s
Oct  9 10:01:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:48.913Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:48.920Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:48.920Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:48.920Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_10:01:49
Oct  9 10:01:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 10:01:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 10:01:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'volumes', 'images', '.mgr', '.nfs', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'backups', 'default.rgw.control']
Oct  9 10:01:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 10:01:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:01:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:01:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:01:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:01:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:49.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:01:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:01:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 10:01:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:01:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:01:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:01:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:01:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:01:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:01:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 10:01:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:01:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:01:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:01:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:01:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:50.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:50 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v863: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 938 B/s wr, 17 op/s
Oct  9 10:01:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:51.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:51 compute-0 nova_compute[187439]: 2025-10-09 10:01:51.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.058 2 DEBUG oslo_concurrency.lockutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.059 2 DEBUG oslo_concurrency.lockutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
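The paired "Acquiring lock" / "Lock ... acquired" lines above are emitted by oslo.concurrency's lockutils wrapper that serializes the per-instance build. A minimal sketch of that pattern (not Nova's actual code), assuming oslo.concurrency is installed:

    # Serialize work on one instance the way the DEBUG lines above trace.
    from oslo_concurrency import lockutils

    INSTANCE_UUID = "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2"  # from the log

    @lockutils.synchronized(INSTANCE_UUID)
    def locked_do_build_and_run_instance():
        # Only one thread may build this instance at a time; lockutils
        # logs the acquire/release (with wait and hold times) at DEBUG.
        pass

    locked_do_build_and_run_instance()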
Oct  9 10:01:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:52.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.069 2 DEBUG nova.compute.manager [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.126 2 DEBUG oslo_concurrency.lockutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.127 2 DEBUG oslo_concurrency.lockutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.133 2 DEBUG nova.virt.hardware [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.133 2 INFO nova.compute.claims [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Claim successful on node compute-0.ctlplane.example.com
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.197 2 DEBUG oslo_concurrency.processutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 10:01:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:01:52] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Oct  9 10:01:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:01:52] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Oct  9 10:01:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:01:52 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/503740570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.566 2 DEBUG oslo_concurrency.processutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.369s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
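Nova's resource claim above sizes shared storage from that "ceph df --format=json" call. A minimal sketch of the same probe and the top-level fields a resource tracker reads, assuming the client.openstack keyring referenced by --id is readable:

    # Query cluster capacity exactly as the logged subprocess does.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])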
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.572 2 DEBUG nova.compute.provider_tree [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.587 2 DEBUG nova.scheduler.client.report [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.605 2 DEBUG oslo_concurrency.lockutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.606 2 DEBUG nova.compute.manager [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  9 10:01:52 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v864: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 597 B/s wr, 6 op/s
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.650 2 DEBUG nova.compute.manager [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.651 2 DEBUG nova.network.neutron [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.669 2 INFO nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.685 2 DEBUG nova.compute.manager [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.749 2 DEBUG nova.compute.manager [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.750 2 DEBUG nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.751 2 INFO nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Creating image(s)
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.771 2 DEBUG nova.storage.rbd_utils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.791 2 DEBUG nova.storage.rbd_utils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.814 2 DEBUG nova.storage.rbd_utils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.819 2 DEBUG oslo_concurrency.processutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.873 2 DEBUG oslo_concurrency.processutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
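The prlimit-wrapped qemu-img call above probes the cached base image before it is cloned. A minimal sketch of the same query without the address-space/CPU limits, reading the JSON report (the path is the cache entry from the log):

    # Inspect the image-cache entry that Nova probed above.
    import json
    import subprocess

    path = ("/var/lib/nova/instances/_base/"
            "5c8d02c7691a8289e33d8b283b22550ff081dadb")
    info = json.loads(subprocess.check_output(
        ["qemu-img", "info", path, "--force-share", "--output=json"]))
    print(info["format"], info["virtual-size"])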
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.873 2 DEBUG oslo_concurrency.lockutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.874 2 DEBUG oslo_concurrency.lockutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.874 2 DEBUG oslo_concurrency.lockutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 10:01:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:52.881 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ef217152-08e8-40c8-a663-3565c5b77d4a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.900 2 DEBUG nova.storage.rbd_utils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.903 2 DEBUG oslo_concurrency.processutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 10:01:52 compute-0 nova_compute[187439]: 2025-10-09 10:01:52.944 2 DEBUG nova.policy [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  9 10:01:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:53 compute-0 nova_compute[187439]: 2025-10-09 10:01:53.074 2 DEBUG oslo_concurrency.processutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 10:01:53 compute-0 nova_compute[187439]: 2025-10-09 10:01:53.126 2 DEBUG nova.storage.rbd_utils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] resizing rbd image eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
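The import returning at 10:01:53.074 and the resize right after it are the two steps that materialize the 1 GiB root disk in the vms pool; per the log, Nova performs the resize through the python-rbd bindings (rbd_utils.py:288) rather than the CLI. A minimal CLI sketch of the equivalent sequence, reusing the flags from the logged command:

    # Import the cached base image, then grow it to the flavor's root size.
    import subprocess

    base = ("/var/lib/nova/instances/_base/"
            "5c8d02c7691a8289e33d8b283b22550ff081dadb")
    disk = "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2_disk"
    creds = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.check_call(["rbd", "import", "--pool", "vms", base, disk,
                           "--image-format=2"] + creds)
    subprocess.check_call(["rbd", "resize", "--pool", "vms", "--image", disk,
                           "--size", "1G"] + creds)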
Oct  9 10:01:53 compute-0 nova_compute[187439]: 2025-10-09 10:01:53.201 2 DEBUG nova.objects.instance [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'migration_context' on Instance uuid eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  9 10:01:53 compute-0 nova_compute[187439]: 2025-10-09 10:01:53.216 2 DEBUG nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  9 10:01:53 compute-0 nova_compute[187439]: 2025-10-09 10:01:53.216 2 DEBUG nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Ensure instance console log exists: /var/lib/nova/instances/eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  9 10:01:53 compute-0 nova_compute[187439]: 2025-10-09 10:01:53.217 2 DEBUG oslo_concurrency.lockutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 10:01:53 compute-0 nova_compute[187439]: 2025-10-09 10:01:53.217 2 DEBUG oslo_concurrency.lockutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 10:01:53 compute-0 nova_compute[187439]: 2025-10-09 10:01:53.217 2 DEBUG oslo_concurrency.lockutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 10:01:53 compute-0 nova_compute[187439]: 2025-10-09 10:01:53.267 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:01:53 compute-0 nova_compute[187439]: 2025-10-09 10:01:53.448 2 DEBUG nova.network.neutron [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Successfully created port: b6adb2d1-94ef-4149-bd66-a2c5929ce9bc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  9 10:01:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:53.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:54.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.253 2 DEBUG nova.network.neutron [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Successfully updated port: b6adb2d1-94ef-4149-bd66-a2c5929ce9bc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.264 2 DEBUG oslo_concurrency.lockutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.264 2 DEBUG oslo_concurrency.lockutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.264 2 DEBUG nova.network.neutron [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.312 2 DEBUG nova.compute.manager [req-e42b4938-98a1-4648-b21c-cb7878128768 req-3223706e-3d4a-4bf8-8ed3-008a2ce45f98 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Received event network-changed-b6adb2d1-94ef-4149-bd66-a2c5929ce9bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.312 2 DEBUG nova.compute.manager [req-e42b4938-98a1-4648-b21c-cb7878128768 req-3223706e-3d4a-4bf8-8ed3-008a2ce45f98 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Refreshing instance network info cache due to event network-changed-b6adb2d1-94ef-4149-bd66-a2c5929ce9bc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.313 2 DEBUG oslo_concurrency.lockutils [req-e42b4938-98a1-4648-b21c-cb7878128768 req-3223706e-3d4a-4bf8-8ed3-008a2ce45f98 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.357 2 DEBUG nova.network.neutron [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  9 10:01:54 compute-0 podman[200267]: 2025-10-09 10:01:54.620006389 +0000 UTC m=+0.056934376 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:01:54 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v865: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.848 2 DEBUG nova.network.neutron [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Updating instance_info_cache with network_info: [{"id": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "address": "fa:16:3e:2a:0b:83", "network": {"id": "0bf47658-57ad-4261-84b9-ab85a8c5f02f", "bridge": "br-int", "label": "tempest-network-smoke--418352199", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6adb2d1-94", "ovs_interfaceid": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.864 2 DEBUG oslo_concurrency.lockutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.864 2 DEBUG nova.compute.manager [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Instance network_info: |[{"id": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "address": "fa:16:3e:2a:0b:83", "network": {"id": "0bf47658-57ad-4261-84b9-ab85a8c5f02f", "bridge": "br-int", "label": "tempest-network-smoke--418352199", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6adb2d1-94", "ovs_interfaceid": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
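The network_info blob above repeats verbatim as the caches refresh; the fields that usually matter when reading it are the port id, MAC, and fixed IP. A minimal sketch over a trimmed copy of the logged structure (real entries carry many more keys):

    import json

    # Trimmed copy of the structure Nova logged above.
    blob = '''[{"id": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc",
                "address": "fa:16:3e:2a:0b:83",
                "network": {"subnets": [{"ips": [{"address": "10.100.0.5"}]}]}}]'''
    vif = json.loads(blob)[0]
    print(vif["id"], vif["address"],
          vif["network"]["subnets"][0]["ips"][0]["address"])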
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.865 2 DEBUG oslo_concurrency.lockutils [req-e42b4938-98a1-4648-b21c-cb7878128768 req-3223706e-3d4a-4bf8-8ed3-008a2ce45f98 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.865 2 DEBUG nova.network.neutron [req-e42b4938-98a1-4648-b21c-cb7878128768 req-3223706e-3d4a-4bf8-8ed3-008a2ce45f98 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Refreshing network info cache for port b6adb2d1-94ef-4149-bd66-a2c5929ce9bc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.867 2 DEBUG nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Start _get_guest_xml network_info=[{"id": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "address": "fa:16:3e:2a:0b:83", "network": {"id": "0bf47658-57ad-4261-84b9-ab85a8c5f02f", "bridge": "br-int", "label": "tempest-network-smoke--418352199", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6adb2d1-94", "ovs_interfaceid": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'boot_index': 0, 'encryption_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'encrypted': False, 'guest_format': None, 'image_id': '9546778e-959c-466e-9bef-81ace5bd1cc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.871 2 WARNING nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.878 2 DEBUG nova.virt.libvirt.host [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.878 2 DEBUG nova.virt.libvirt.host [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.881 2 DEBUG nova.virt.libvirt.host [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.881 2 DEBUG nova.virt.libvirt.host [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.882 2 DEBUG nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.882 2 DEBUG nova.virt.hardware [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T09:54:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c4b2ce4-c9d2-467c-bac4-dc6a1184a891',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.882 2 DEBUG nova.virt.hardware [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.882 2 DEBUG nova.virt.hardware [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.883 2 DEBUG nova.virt.hardware [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.883 2 DEBUG nova.virt.hardware [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.883 2 DEBUG nova.virt.hardware [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.883 2 DEBUG nova.virt.hardware [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.883 2 DEBUG nova.virt.hardware [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.884 2 DEBUG nova.virt.hardware [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.884 2 DEBUG nova.virt.hardware [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.884 2 DEBUG nova.virt.hardware [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  9 10:01:54 compute-0 nova_compute[187439]: 2025-10-09 10:01:54.886 2 DEBUG oslo_concurrency.processutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 10:01:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 10:01:55 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1985554974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.248 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.267 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.268 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.268 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.269 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.269 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.295 2 DEBUG oslo_concurrency.processutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
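The repeated "ceph mon dump" probes above are how the libvirt driver collects monitor addresses for the guest's RBD disk definition. A minimal sketch of the same call, listing each monitor with the credentials the logged command uses:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    for mon in json.loads(out)["mons"]:
        print(mon["name"], mon["addr"])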
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.322 2 DEBUG nova.storage.rbd_utils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.325 2 DEBUG oslo_concurrency.processutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 10:01:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:01:55 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2940217175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:01:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:55.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.663 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
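
Both Ceph probes above go through oslo_concurrency.processutils.execute, which logs the "Running cmd" / "CMD ... returned" pair around every subprocess. A standalone sketch of the same ceph df call the resource tracker issues (execute() returning a (stdout, stderr) tuple is real API; the exact JSON keys follow the usual ceph df --format=json layout and should be treated as an assumption):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])
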
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.665 2 DEBUG nova.network.neutron [req-e42b4938-98a1-4648-b21c-cb7878128768 req-3223706e-3d4a-4bf8-8ed3-008a2ce45f98 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Updated VIF entry in instance network info cache for port b6adb2d1-94ef-4149-bd66-a2c5929ce9bc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.666 2 DEBUG nova.network.neutron [req-e42b4938-98a1-4648-b21c-cb7878128768 req-3223706e-3d4a-4bf8-8ed3-008a2ce45f98 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Updating instance_info_cache with network_info: [{"id": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "address": "fa:16:3e:2a:0b:83", "network": {"id": "0bf47658-57ad-4261-84b9-ab85a8c5f02f", "bridge": "br-int", "label": "tempest-network-smoke--418352199", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6adb2d1-94", "ovs_interfaceid": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
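
The instance_info_cache payload above is plain JSON, so the interesting fields (port, MAC, fixed IPs, MTU) fall out with a short walk over the structure. A self-contained sketch using a trimmed copy of the logged payload:

    network_info = [{
        "id": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc",
        "address": "fa:16:3e:2a:0b:83",
        "network": {
            "subnets": [{"ips": [{"address": "10.100.0.5"}]}],
            "meta": {"mtu": 1442},
        },
    }]  # trimmed from the cache update logged above
    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], ips, vif["network"]["meta"]["mtu"])
    # -> b6adb2d1-... fa:16:3e:2a:0b:83 ['10.100.0.5'] 1442
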
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.682 2 DEBUG oslo_concurrency.lockutils [req-e42b4938-98a1-4648-b21c-cb7878128768 req-3223706e-3d4a-4bf8-8ed3-008a2ce45f98 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:01:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 10:01:55 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3470198818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.732 2 DEBUG oslo_concurrency.processutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.734 2 DEBUG nova.virt.libvirt.vif [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T10:01:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-789438469',display_name='tempest-TestNetworkBasicOps-server-789438469',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-789438469',id=10,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHz2csLjROGW6puiUsq/DK2eUbSDHWua4HHP2HjVlMnr0HbBhHui8Uqq2Xb0MZ1EWl06DycuVyFM5Y0YKQRB2ZA67Qs7u+8VWufSHktRa+mDSGpGFK4bz0s+bo0+BkmtQw==',key_name='tempest-TestNetworkBasicOps-1863507351',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-a0it4ou7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T10:01:52Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "address": "fa:16:3e:2a:0b:83", "network": {"id": "0bf47658-57ad-4261-84b9-ab85a8c5f02f", "bridge": "br-int", "label": "tempest-network-smoke--418352199", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6adb2d1-94", "ovs_interfaceid": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.734 2 DEBUG nova.network.os_vif_util [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "address": "fa:16:3e:2a:0b:83", "network": {"id": "0bf47658-57ad-4261-84b9-ab85a8c5f02f", "bridge": "br-int", "label": "tempest-network-smoke--418352199", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6adb2d1-94", "ovs_interfaceid": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.735 2 DEBUG nova.network.os_vif_util [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:0b:83,bridge_name='br-int',has_traffic_filtering=True,id=b6adb2d1-94ef-4149-bd66-a2c5929ce9bc,network=Network(0bf47658-57ad-4261-84b9-ab85a8c5f02f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6adb2d1-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
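
nova_to_osvif_vif maps the legacy VIF dict onto os-vif's versioned objects; the "Converted object" line above is the repr of the result. A hedged sketch of constructing the same VIFOpenVSwitch by hand (the class lives in os_vif.objects.vif and the field names match the repr in the log, but Nova populates far more network/subnet detail than shown, so treat this as illustrative only):

    import os_vif
    from os_vif import objects as osv_objects
    from os_vif.objects import network as net_obj
    from os_vif.objects import vif as vif_obj

    osv_objects.register_all()  # register the oslo versioned-object classes
    vif = vif_obj.VIFOpenVSwitch(
        id='b6adb2d1-94ef-4149-bd66-a2c5929ce9bc',
        address='fa:16:3e:2a:0b:83',
        vif_name='tapb6adb2d1-94',
        bridge_name='br-int',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        network=net_obj.Network(id='0bf47658-57ad-4261-84b9-ab85a8c5f02f'))
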
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.736 2 DEBUG nova.objects.instance [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_devices' on Instance uuid eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.746 2 DEBUG nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] End _get_guest_xml xml=<domain type="kvm">
Oct  9 10:01:55 compute-0 nova_compute[187439]:  <uuid>eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2</uuid>
Oct  9 10:01:55 compute-0 nova_compute[187439]:  <name>instance-0000000a</name>
Oct  9 10:01:55 compute-0 nova_compute[187439]:  <memory>131072</memory>
Oct  9 10:01:55 compute-0 nova_compute[187439]:  <vcpu>1</vcpu>
Oct  9 10:01:55 compute-0 nova_compute[187439]:  <metadata>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <nova:name>tempest-TestNetworkBasicOps-server-789438469</nova:name>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <nova:creationTime>2025-10-09 10:01:54</nova:creationTime>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <nova:flavor name="m1.nano">
Oct  9 10:01:55 compute-0 nova_compute[187439]:        <nova:memory>128</nova:memory>
Oct  9 10:01:55 compute-0 nova_compute[187439]:        <nova:disk>1</nova:disk>
Oct  9 10:01:55 compute-0 nova_compute[187439]:        <nova:swap>0</nova:swap>
Oct  9 10:01:55 compute-0 nova_compute[187439]:        <nova:ephemeral>0</nova:ephemeral>
Oct  9 10:01:55 compute-0 nova_compute[187439]:        <nova:vcpus>1</nova:vcpus>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      </nova:flavor>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <nova:owner>
Oct  9 10:01:55 compute-0 nova_compute[187439]:        <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct  9 10:01:55 compute-0 nova_compute[187439]:        <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      </nova:owner>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <nova:ports>
Oct  9 10:01:55 compute-0 nova_compute[187439]:        <nova:port uuid="b6adb2d1-94ef-4149-bd66-a2c5929ce9bc">
Oct  9 10:01:55 compute-0 nova_compute[187439]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:        </nova:port>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      </nova:ports>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    </nova:instance>
Oct  9 10:01:55 compute-0 nova_compute[187439]:  </metadata>
Oct  9 10:01:55 compute-0 nova_compute[187439]:  <sysinfo type="smbios">
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <system>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <entry name="manufacturer">RDO</entry>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <entry name="product">OpenStack Compute</entry>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <entry name="serial">eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2</entry>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <entry name="uuid">eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2</entry>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <entry name="family">Virtual Machine</entry>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    </system>
Oct  9 10:01:55 compute-0 nova_compute[187439]:  </sysinfo>
Oct  9 10:01:55 compute-0 nova_compute[187439]:  <os>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <boot dev="hd"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <smbios mode="sysinfo"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:  </os>
Oct  9 10:01:55 compute-0 nova_compute[187439]:  <features>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <acpi/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <apic/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <vmcoreinfo/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:  </features>
Oct  9 10:01:55 compute-0 nova_compute[187439]:  <clock offset="utc">
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <timer name="pit" tickpolicy="delay"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <timer name="hpet" present="no"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:  </clock>
Oct  9 10:01:55 compute-0 nova_compute[187439]:  <cpu mode="host-model" match="exact">
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <topology sockets="1" cores="1" threads="1"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:  </cpu>
Oct  9 10:01:55 compute-0 nova_compute[187439]:  <devices>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <disk type="network" device="disk">
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <driver type="raw" cache="none"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <source protocol="rbd" name="vms/eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2_disk">
Oct  9 10:01:55 compute-0 nova_compute[187439]:        <host name="192.168.122.100" port="6789"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:        <host name="192.168.122.102" port="6789"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:        <host name="192.168.122.101" port="6789"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      </source>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <auth username="openstack">
Oct  9 10:01:55 compute-0 nova_compute[187439]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      </auth>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <target dev="vda" bus="virtio"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    </disk>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <disk type="network" device="cdrom">
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <driver type="raw" cache="none"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <source protocol="rbd" name="vms/eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2_disk.config">
Oct  9 10:01:55 compute-0 nova_compute[187439]:        <host name="192.168.122.100" port="6789"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:        <host name="192.168.122.102" port="6789"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:        <host name="192.168.122.101" port="6789"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      </source>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <auth username="openstack">
Oct  9 10:01:55 compute-0 nova_compute[187439]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      </auth>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <target dev="sda" bus="sata"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    </disk>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <interface type="ethernet">
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <mac address="fa:16:3e:2a:0b:83"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <model type="virtio"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <driver name="vhost" rx_queue_size="512"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <mtu size="1442"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <target dev="tapb6adb2d1-94"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    </interface>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <serial type="pty">
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <log file="/var/lib/nova/instances/eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2/console.log" append="off"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    </serial>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <video>
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <model type="virtio"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    </video>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <input type="tablet" bus="usb"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <rng model="virtio">
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <backend model="random">/dev/urandom</backend>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    </rng>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <controller type="usb" index="0"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    <memballoon model="virtio">
Oct  9 10:01:55 compute-0 nova_compute[187439]:      <stats period="10"/>
Oct  9 10:01:55 compute-0 nova_compute[187439]:    </memballoon>
Oct  9 10:01:55 compute-0 nova_compute[187439]:  </devices>
Oct  9 10:01:55 compute-0 nova_compute[187439]: </domain>
Oct  9 10:01:55 compute-0 nova_compute[187439]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.748 2 DEBUG nova.compute.manager [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Preparing to wait for external event network-vif-plugged-b6adb2d1-94ef-4149-bd66-a2c5929ce9bc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.748 2 DEBUG oslo_concurrency.lockutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.748 2 DEBUG oslo_concurrency.lockutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.748 2 DEBUG oslo_concurrency.lockutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.749 2 DEBUG nova.virt.libvirt.vif [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T10:01:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-789438469',display_name='tempest-TestNetworkBasicOps-server-789438469',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-789438469',id=10,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHz2csLjROGW6puiUsq/DK2eUbSDHWua4HHP2HjVlMnr0HbBhHui8Uqq2Xb0MZ1EWl06DycuVyFM5Y0YKQRB2ZA67Qs7u+8VWufSHktRa+mDSGpGFK4bz0s+bo0+BkmtQw==',key_name='tempest-TestNetworkBasicOps-1863507351',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-a0it4ou7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T10:01:52Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "address": "fa:16:3e:2a:0b:83", "network": {"id": "0bf47658-57ad-4261-84b9-ab85a8c5f02f", "bridge": "br-int", "label": "tempest-network-smoke--418352199", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6adb2d1-94", "ovs_interfaceid": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.749 2 DEBUG nova.network.os_vif_util [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "address": "fa:16:3e:2a:0b:83", "network": {"id": "0bf47658-57ad-4261-84b9-ab85a8c5f02f", "bridge": "br-int", "label": "tempest-network-smoke--418352199", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6adb2d1-94", "ovs_interfaceid": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.750 2 DEBUG nova.network.os_vif_util [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:0b:83,bridge_name='br-int',has_traffic_filtering=True,id=b6adb2d1-94ef-4149-bd66-a2c5929ce9bc,network=Network(0bf47658-57ad-4261-84b9-ab85a8c5f02f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6adb2d1-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.750 2 DEBUG os_vif [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:0b:83,bridge_name='br-int',has_traffic_filtering=True,id=b6adb2d1-94ef-4149-bd66-a2c5929ce9bc,network=Network(0bf47658-57ad-4261-84b9-ab85a8c5f02f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6adb2d1-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.751 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.751 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.755 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6adb2d1-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.756 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb6adb2d1-94, col_values=(('external_ids', {'iface-id': 'b6adb2d1-94ef-4149-bd66-a2c5929ce9bc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2a:0b:83', 'vm-uuid': 'eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
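
The two ovsdbapp transactions above are the Python-side equivalent of an ovs-vsctl add-port followed by a db set on the Interface row. A sketch using ovsdbapp's Open_vSwitch schema API (class and method names follow ovsdbapp's public API as used by os-vif; the local database socket path is an assumption):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapb6adb2d1-94', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapb6adb2d1-94',
            ('external_ids', {
                'iface-id': 'b6adb2d1-94ef-4149-bd66-a2c5929ce9bc',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:2a:0b:83',
                'vm-uuid': 'eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2'})))

The external_ids keys are what ovn-controller matches when it claims the lport a few entries below.
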
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:55 compute-0 NetworkManager[982]: <info>  [1760004115.7588] manager: (tapb6adb2d1-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.768 2 INFO os_vif [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:0b:83,bridge_name='br-int',has_traffic_filtering=True,id=b6adb2d1-94ef-4149-bd66-a2c5929ce9bc,network=Network(0bf47658-57ad-4261-84b9-ab85a8c5f02f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6adb2d1-94')#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.798 2 DEBUG nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.799 2 DEBUG nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.799 2 DEBUG nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:2a:0b:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.800 2 INFO nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Using config drive#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.820 2 DEBUG nova.storage.rbd_utils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.977 2 WARNING nova.virt.libvirt.driver [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.979 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4683MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.979 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:55 compute-0 nova_compute[187439]: 2025-10-09 10:01:55.979 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.020 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Instance eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.020 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.020 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.043 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:01:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:56.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.192 2 INFO nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Creating config drive at /var/lib/nova/instances/eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2/disk.config#033[00m
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.196 2 DEBUG oslo_concurrency.processutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi0qlqk3w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.321 2 DEBUG oslo_concurrency.processutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi0qlqk3w" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
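
The config drive is a vanilla Joliet/Rock Ridge ISO labelled config-2; the mkisofs invocation is logged verbatim above. A subprocess sketch of the same build (flag set and paths copied from the log; the staging directory is nova's temp tree of metadata files and is assumed to already exist here):

    import subprocess

    iso_path = ('/var/lib/nova/instances/'
                'eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2/disk.config')
    staging_dir = '/tmp/tmpi0qlqk3w'  # nova's staged metadata tree
    subprocess.run(
        ['/usr/bin/mkisofs', '-o', iso_path,
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
         '-quiet', '-J', '-r', '-V', 'config-2', staging_dir],
        check=True)
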
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.349 2 DEBUG nova.storage.rbd_utils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.353 2 DEBUG oslo_concurrency.processutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2/disk.config eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:01:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:01:56 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2633646493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.416 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:01:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.423 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.437 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
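
Placement treats each inventory as (total - reserved) * allocation_ratio schedulable units, so the data above works out to 16 VCPU, 7168 MB of RAM and 52.2 GB of disk. The arithmetic, spelled out:

    inventory = {
        'VCPU':      {'total': 4,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # VCPU 16.0 / MEMORY_MB 7168.0 / DISK_GB 52.2
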
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.453 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.454 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.475s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.461 2 DEBUG oslo_concurrency.processutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2/disk.config eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.462 2 INFO nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Deleting local config drive /var/lib/nova/instances/eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2/disk.config because it was imported into RBD.#033[00m
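
Because the instance is RBD-backed, the freshly built ISO is pushed into the vms pool and the local copy removed, exactly as the two log lines above describe. A standalone sketch of that step (rbd import command line copied from the log):

    import os
    import subprocess

    path = ('/var/lib/nova/instances/'
            'eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2/disk.config')
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', path,
         'eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)
    os.unlink(path)  # "Deleting local config drive ... imported into RBD"
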
Oct  9 10:01:56 compute-0 kernel: tapb6adb2d1-94: entered promiscuous mode
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:56 compute-0 ovn_controller[83056]: 2025-10-09T10:01:56Z|00060|binding|INFO|Claiming lport b6adb2d1-94ef-4149-bd66-a2c5929ce9bc for this chassis.
Oct  9 10:01:56 compute-0 ovn_controller[83056]: 2025-10-09T10:01:56Z|00061|binding|INFO|b6adb2d1-94ef-4149-bd66-a2c5929ce9bc: Claiming fa:16:3e:2a:0b:83 10.100.0.5
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:56 compute-0 NetworkManager[982]: <info>  [1760004116.5196] manager: (tapb6adb2d1-94): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.521 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:0b:83 10.100.0.5'], port_security=['fa:16:3e:2a:0b:83 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bf47658-57ad-4261-84b9-ab85a8c5f02f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5cf6ff21-3de7-4af5-a258-b8c74aff1da4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75d55572-a5af-4670-bdea-a06a1b91e1e6, chassis=[<ovs.db.idl.Row object at 0x7f406a6797f0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f406a6797f0>], logical_port=b6adb2d1-94ef-4149-bd66-a2c5929ce9bc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.522 92053 INFO neutron.agent.ovn.metadata.agent [-] Port b6adb2d1-94ef-4149-bd66-a2c5929ce9bc in datapath 0bf47658-57ad-4261-84b9-ab85a8c5f02f bound to our chassis#033[00m
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.523 92053 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0bf47658-57ad-4261-84b9-ab85a8c5f02f#033[00m
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.535 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[423f2f9d-0243-42a3-88a9-f438fbfbb91a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.535 92053 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0bf47658-51 in ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.537 192856 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0bf47658-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.537 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[66283551-39cf-4d1f-afef-c5383423683d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.538 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[b9a18202-ae06-4f7c-9f39-d414a55a6ddc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
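
Provisioning metadata for the datapath means giving the network its ovnmeta- namespace plus a veth pair whose inner end (tap0bf47658-51) lives inside it; the agent performs those steps through privsep and pyroute2, which is what the reply lines above are. A rough iproute2 equivalent driven from Python (interface and namespace names taken from the log; the agent additionally wires the outer end into br-int and starts the metadata proxy, omitted here):

    import subprocess

    ns = 'ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f'
    subprocess.run(['ip', 'netns', 'add', ns], check=True)
    subprocess.run(['ip', 'link', 'add', 'tap0bf47658-50',
                    'type', 'veth', 'peer', 'name', 'tap0bf47658-51'],
                   check=True)
    subprocess.run(['ip', 'link', 'set', 'tap0bf47658-51', 'netns', ns],
                   check=True)
    subprocess.run(['ip', '-n', ns, 'link', 'set', 'tap0bf47658-51', 'up'],
                   check=True)
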
Oct  9 10:01:56 compute-0 systemd-machined[143379]: New machine qemu-4-instance-0000000a.
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.551 92357 DEBUG oslo.privsep.daemon [-] privsep: reply[3693d70d-0891-4b85-9585-dba94a2b1b90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:56 compute-0 systemd-udevd[200467]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 10:01:56 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-0000000a.
Oct  9 10:01:56 compute-0 NetworkManager[982]: <info>  [1760004116.5692] device (tapb6adb2d1-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 10:01:56 compute-0 NetworkManager[982]: <info>  [1760004116.5702] device (tapb6adb2d1-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.571 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[1241a08b-d6f4-44d8-a16d-2a23b802bee2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.596 192891 DEBUG oslo.privsep.daemon [-] privsep: reply[4a1517bb-6cbf-448d-8138-4459cb535245]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.601 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[99205eec-5885-4aaf-be79-2c0e4d103f0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:56 compute-0 NetworkManager[982]: <info>  [1760004116.6028] manager: (tap0bf47658-50): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Oct  9 10:01:56 compute-0 systemd-udevd[200470]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:56 compute-0 ovn_controller[83056]: 2025-10-09T10:01:56Z|00062|binding|INFO|Setting lport b6adb2d1-94ef-4149-bd66-a2c5929ce9bc ovn-installed in OVS
Oct  9 10:01:56 compute-0 ovn_controller[83056]: 2025-10-09T10:01:56Z|00063|binding|INFO|Setting lport b6adb2d1-94ef-4149-bd66-a2c5929ce9bc up in Southbound
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:56 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v866: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.640 192891 DEBUG oslo.privsep.daemon [-] privsep: reply[8c450a5e-789e-4109-9eba-022763fc3751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.643 192891 DEBUG oslo.privsep.daemon [-] privsep: reply[02e20573-9c20-45ed-ab6e-ec1fcf5bcdd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:56 compute-0 NetworkManager[982]: <info>  [1760004116.6593] device (tap0bf47658-50): carrier: link connected
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.663 192891 DEBUG oslo.privsep.daemon [-] privsep: reply[38ba572e-104e-4376-b93a-1d2e5d4a1d99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.675 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[fc84a0a6-c4a5-4b58-9784-7c61fd957f08]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0bf47658-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:aa:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 180119, 'reachable_time': 21921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 200490, 'error': None, 'target': 'ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.684 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[bccbd948-8a96-4a77-93ca-b3c28b0fe173]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:aa88'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 180119, 'tstamp': 180119}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 200491, 'error': None, 'target': 'ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.695 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[7d275647-e82f-4a86-a82d-e37d0cd251d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0bf47658-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:aa:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 180119, 'reachable_time': 21921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 200492, 'error': None, 'target': 'ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
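The large privsep payloads above are pyroute2 netlink messages (RTM_NEWLINK and RTM_NEWADDR) serialized back across the privsep channel; the useful fields sit in each message's attrs list of [name, value] pairs. A small reader, mirroring pyroute2's own get_attr():

    def get_attr(msg, name):
        """First value for a netlink attribute, e.g.
        get_attr(link_msg, 'IFLA_IFNAME')  -> 'tap0bf47658-51'
        get_attr(link_msg, 'IFLA_ADDRESS') -> 'fa:16:3e:e0:aa:88'"""
        for key, value in msg.get('attrs', []):
            if key == name:
                return value
        return None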
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.713 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[05f042a7-24bf-47f0-aed7-891294056748]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.734 2 DEBUG nova.compute.manager [req-13537ab7-f820-4525-a84d-e195d6209a63 req-e1948406-706b-46d8-9c76-b46659efc1d5 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Received event network-vif-plugged-b6adb2d1-94ef-4149-bd66-a2c5929ce9bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.734 2 DEBUG oslo_concurrency.lockutils [req-13537ab7-f820-4525-a84d-e195d6209a63 req-e1948406-706b-46d8-9c76-b46659efc1d5 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.735 2 DEBUG oslo_concurrency.lockutils [req-13537ab7-f820-4525-a84d-e195d6209a63 req-e1948406-706b-46d8-9c76-b46659efc1d5 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.735 2 DEBUG oslo_concurrency.lockutils [req-13537ab7-f820-4525-a84d-e195d6209a63 req-e1948406-706b-46d8-9c76-b46659efc1d5 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.735 2 DEBUG nova.compute.manager [req-13537ab7-f820-4525-a84d-e195d6209a63 req-e1948406-706b-46d8-9c76-b46659efc1d5 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Processing event network-vif-plugged-b6adb2d1-94ef-4149-bd66-a2c5929ce9bc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
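The Acquiring/acquired/released triplet above is oslo.concurrency's named internal lock guarding the per-instance event list while the waiting network-vif-plugged event is popped and dispatched; the equivalent pattern is roughly:

    from oslo_concurrency import lockutils

    with lockutils.lock('eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2-events'):
        pass  # pop the registered waiter for the event and hand it the result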
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.751 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[f71853da-0ad7-4019-bf0f-1d276f6b3bef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.753 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0bf47658-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.753 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.753 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0bf47658-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:01:56 compute-0 kernel: tap0bf47658-50: entered promiscuous mode
Oct  9 10:01:56 compute-0 NetworkManager[982]: <info>  [1760004116.7562] manager: (tap0bf47658-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.759 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0bf47658-50, col_values=(('external_ids', {'iface-id': 'a1fa08b5-bd0f-4c7f-b6cf-6b20fdb83e17'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
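The three ovsdbapp transactions above rehome the metadata veth: delete the port from br-ex if a stale copy exists (a no-op here, hence "Transaction caused no change"), add it to br-int, then set external_ids:iface-id so ovn-controller can bind it. A hedged sketch of the same calls through ovsdbapp; the socket path is an assumption, and the agent actually issued each command as its own one-command transaction:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server(
            'unix:/run/openvswitch/db.sock', 'Open_vSwitch'),
        timeout=10)
    ovs = impl_idl.OvsdbIdl(conn)
    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tap0bf47658-50', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tap0bf47658-50', may_exist=True))
        txn.add(ovs.db_set(
            'Interface', 'tap0bf47658-50',
            ('external_ids',
             {'iface-id': 'a1fa08b5-bd0f-4c7f-b6cf-6b20fdb83e17'})))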
Oct  9 10:01:56 compute-0 ovn_controller[83056]: 2025-10-09T10:01:56Z|00064|binding|INFO|Releasing lport a1fa08b5-bd0f-4c7f-b6cf-6b20fdb83e17 from this chassis (sb_readonly=0)
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.779 92053 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0bf47658-57ad-4261-84b9-ab85a8c5f02f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0bf47658-57ad-4261-84b9-ab85a8c5f02f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.780 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[808cccf6-d569-445b-b2db-b72f7fc444e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.781 92053 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: global
Oct  9 10:01:56 compute-0 nova_compute[187439]: 2025-10-09 10:01:56.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]:    log         /dev/log local0 debug
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]:    log-tag     haproxy-metadata-proxy-0bf47658-57ad-4261-84b9-ab85a8c5f02f
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]:    user        root
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]:    group       root
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]:    maxconn     1024
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]:    pidfile     /var/lib/neutron/external/pids/0bf47658-57ad-4261-84b9-ab85a8c5f02f.pid.haproxy
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]:    daemon
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: defaults
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]:    log global
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]:    mode http
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]:    option httplog
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]:    option dontlognull
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]:    option http-server-close
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]:    option forwardfor
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]:    retries                 3
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]:    timeout http-request    30s
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]:    timeout connect         30s
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]:    timeout client          32s
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]:    timeout server          32s
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]:    timeout http-keep-alive 30s
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: listen listener
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]:    bind 169.254.169.254:80
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]:    server metadata /var/lib/neutron/metadata_proxy
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]:    http-request add-header X-OVN-Network-ID 0bf47658-57ad-4261-84b9-ab85a8c5f02f
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
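The dump above is the per-network proxy config the agent writes before spawning haproxy: bind the metadata IP 169.254.169.254:80 inside the ovnmeta- namespace, forward to the agent's unix socket at /var/lib/neutron/metadata_proxy, and tag every request with X-OVN-Network-ID so the agent can map it back to a port. A reduced sketch of create_config_file(), trimmed to the fields that vary per network (the real code renders a fuller template):

    CFG = """global
        user root
        group root
        maxconn 1024
        pidfile {pidfile}
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata {socket}
        http-request add-header X-OVN-Network-ID {network_id}
    """

    def create_config_file(network_id):
        path = '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % network_id
        pidfile = ('/var/lib/neutron/external/pids/%s.pid.haproxy'
                   % network_id)
        with open(path, 'w') as f:
            f.write(CFG.format(pidfile=pidfile,
                               socket='/var/lib/neutron/metadata_proxy',
                               network_id=network_id))
        return path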
Oct  9 10:01:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:01:56.783 92053 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f', 'env', 'PROCESS_TAG=haproxy-0bf47658-57ad-4261-84b9-ab85a8c5f02f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0bf47658-57ad-4261-84b9-ab85a8c5f02f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  9 10:01:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:57.081Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:57.090Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:57.090Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:57.090Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
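All three alertmanager webhooks fail the same way: the ceph-dashboard receiver posts to np000547830{2,3,4}.shiftstack:8443, but the resolver at 192.168.122.80 has no records for those names, so every notification is retried and eventually canceled. The failing lookup can be reproduced directly (dnspython assumed available; it is not part of this host's stack):

    import dns.resolver

    r = dns.resolver.Resolver(configure=False)
    r.nameservers = ['192.168.122.80']      # the resolver named in the log
    try:
        r.resolve('np0005478302.shiftstack', 'A')
    except dns.resolver.NXDOMAIN:
        print('no such host')               # matches the errors above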
Oct  9 10:01:57 compute-0 podman[200521]: 2025-10-09 10:01:57.09730657 +0000 UTC m=+0.041277749 container create 2d603d1dc7cb9d2f50903336f83e7cd0eb9c13720f0af21ce4856465419cf24d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  9 10:01:57 compute-0 systemd[1]: Started libpod-conmon-2d603d1dc7cb9d2f50903336f83e7cd0eb9c13720f0af21ce4856465419cf24d.scope.
Oct  9 10:01:57 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d578a72116b5ba3b407fe1d5df69bc32fea3524223b7bedb43229566e38e76/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  9 10:01:57 compute-0 podman[200521]: 2025-10-09 10:01:57.166198316 +0000 UTC m=+0.110169506 container init 2d603d1dc7cb9d2f50903336f83e7cd0eb9c13720f0af21ce4856465419cf24d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  9 10:01:57 compute-0 podman[200521]: 2025-10-09 10:01:57.172261546 +0000 UTC m=+0.116232726 container start 2d603d1dc7cb9d2f50903336f83e7cd0eb9c13720f0af21ce4856465419cf24d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  9 10:01:57 compute-0 podman[200521]: 2025-10-09 10:01:57.077873123 +0000 UTC m=+0.021844323 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 10:01:57 compute-0 neutron-haproxy-ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f[200533]: [NOTICE]   (200552) : New worker (200564) forked
Oct  9 10:01:57 compute-0 neutron-haproxy-ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f[200533]: [NOTICE]   (200552) : Loading success.
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.455 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.640 2 DEBUG nova.compute.manager [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.641 2 DEBUG nova.virt.driver [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Emitting event <LifecycleEvent: 1760004117.6400108, eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.642 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] VM Started (Lifecycle Event)#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.646 2 DEBUG nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.649 2 INFO nova.virt.libvirt.driver [-] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Instance spawned successfully.#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.650 2 DEBUG nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  9 10:01:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:57.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
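These beast access lines - anonymous "HEAD / HTTP/1.0" from 192.168.122.100 and .102 roughly once a second, always 200 - have the signature of load-balancer health checks against radosgw. The equivalent probe by hand (the RGW port is not shown in the log; 8080 below is purely an assumption):

    import http.client

    conn = http.client.HTTPConnection('compute-0', 8080)  # port assumed
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # the probes above all return 200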
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.660 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.665 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
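The sync line compares nova's stored power_state (0, nothing recorded yet while the instance is still building) against what libvirt reports (1, running). The numeric values are nova.compute.power_state constants:

    # nova.compute.power_state values referenced in the log line above
    NOSTATE   = 0   # DB value while the instance is still spawning
    RUNNING   = 1   # what the hypervisor reports once the guest starts
    PAUSED    = 3
    SHUTDOWN  = 4
    CRASHED   = 6
    SUSPENDED = 7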
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.667 2 DEBUG nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.668 2 DEBUG nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.668 2 DEBUG nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.668 2 DEBUG nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.669 2 DEBUG nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.669 2 DEBUG nova.virt.libvirt.driver [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
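The six "Found default" lines record which device models the guest was actually built with, so later rebuilds and attaches keep stable buses even though the image never specified them. Collected as data, straight from the log:

    registered_image_defaults = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }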
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.686 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.686 2 DEBUG nova.virt.driver [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Emitting event <LifecycleEvent: 1760004117.640205, eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.686 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] VM Paused (Lifecycle Event)#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.704 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.706 2 DEBUG nova.virt.driver [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Emitting event <LifecycleEvent: 1760004117.6436572, eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.706 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] VM Resumed (Lifecycle Event)#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.714 2 INFO nova.compute.manager [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Took 4.96 seconds to spawn the instance on the hypervisor.#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.714 2 DEBUG nova.compute.manager [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.720 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.722 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.744 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.757 2 INFO nova.compute.manager [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Took 5.66 seconds to build instance.#033[00m
Oct  9 10:01:57 compute-0 nova_compute[187439]: 2025-10-09 10:01:57.769 2 DEBUG oslo_concurrency.lockutils [None req-5ec0ad52-4544-487d-8372-e305da410a6f 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:01:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:58.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:58 compute-0 nova_compute[187439]: 2025-10-09 10:01:58.248 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:58 compute-0 nova_compute[187439]: 2025-10-09 10:01:58.251 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:58 compute-0 nova_compute[187439]: 2025-10-09 10:01:58.251 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:01:58 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v867: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  9 10:01:58 compute-0 nova_compute[187439]: 2025-10-09 10:01:58.846 2 DEBUG nova.compute.manager [req-ff840014-6f28-42cf-8f67-88efcc8c2be5 req-518dae05-efc6-46ec-9911-893d6dd81db0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Received event network-vif-plugged-b6adb2d1-94ef-4149-bd66-a2c5929ce9bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:01:58 compute-0 nova_compute[187439]: 2025-10-09 10:01:58.847 2 DEBUG oslo_concurrency.lockutils [req-ff840014-6f28-42cf-8f67-88efcc8c2be5 req-518dae05-efc6-46ec-9911-893d6dd81db0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:58 compute-0 nova_compute[187439]: 2025-10-09 10:01:58.847 2 DEBUG oslo_concurrency.lockutils [req-ff840014-6f28-42cf-8f67-88efcc8c2be5 req-518dae05-efc6-46ec-9911-893d6dd81db0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:58 compute-0 nova_compute[187439]: 2025-10-09 10:01:58.848 2 DEBUG oslo_concurrency.lockutils [req-ff840014-6f28-42cf-8f67-88efcc8c2be5 req-518dae05-efc6-46ec-9911-893d6dd81db0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:58 compute-0 nova_compute[187439]: 2025-10-09 10:01:58.848 2 DEBUG nova.compute.manager [req-ff840014-6f28-42cf-8f67-88efcc8c2be5 req-518dae05-efc6-46ec-9911-893d6dd81db0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] No waiting events found dispatching network-vif-plugged-b6adb2d1-94ef-4149-bd66-a2c5929ce9bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:01:58 compute-0 nova_compute[187439]: 2025-10-09 10:01:58.848 2 WARNING nova.compute.manager [req-ff840014-6f28-42cf-8f67-88efcc8c2be5 req-518dae05-efc6-46ec-9911-893d6dd81db0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Received unexpected event network-vif-plugged-b6adb2d1-94ef-4149-bd66-a2c5929ce9bc for instance with vm_state active and task_state None.#033[00m
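The WARNING is benign: the first copy of network-vif-plugged was consumed by the spawn-time waiter ("Instance event wait completed in 0 seconds" above), so this later copy finds no registered waiter and is only logged. The pop-or-warn pattern, reduced to a sketch (not nova's real class):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._events = {}     # instance uuid -> {event name: waiter}
            self._lock = threading.Lock()

        def pop_instance_event(self, instance_uuid, event_name):
            with self._lock:
                waiters = self._events.get(instance_uuid, {})
                # None here is what yields "No waiting events found
                # dispatching" followed by the unexpected-event warning.
                return waiters.pop(event_name, None)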
Oct  9 10:01:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:58.913Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:58.932Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:58.936Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:01:58.936Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:01:59 compute-0 nova_compute[187439]: 2025-10-09 10:01:59.250 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:59 compute-0 nova_compute[187439]: 2025-10-09 10:01:59.251 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:01:59 compute-0 nova_compute[187439]: 2025-10-09 10:01:59.251 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:01:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
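Each pg target above is the pool's usage fraction times its bias times the cluster's PG budget (OSD count x mon_target_pg_per_osd). With the default 100 PGs per OSD and three OSDs the multiplier is 300, which reproduces the logged numbers exactly:

    def pg_target(usage_fraction, bias, osds=3, target_pg_per_osd=100):
        # pool 'vms': 0.0003459970412515465 * 1.0 * 300
        #   -> 0.10379911237546395, matching the log line above;
        # the raw target is then quantized and clamped against per-pool
        # minimums, hence "quantized to 32 (current 32)".
        return usage_fraction * bias * osds * target_pg_per_osd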
Oct  9 10:01:59 compute-0 podman[200588]: 2025-10-09 10:01:59.364372812 +0000 UTC m=+0.072757867 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
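The health_status=healthy record comes from podman executing the container's configured test (the /openstack/healthcheck script mounted in via the healthcheck stanza of config_data). The same check can be triggered by hand:

    import subprocess

    # exit status 0 corresponds to the "healthy" status logged above
    subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
                   check=True)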
Oct  9 10:01:59 compute-0 nova_compute[187439]: 2025-10-09 10:01:59.487 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "refresh_cache-eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:01:59 compute-0 nova_compute[187439]: 2025-10-09 10:01:59.488 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquired lock "refresh_cache-eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:01:59 compute-0 nova_compute[187439]: 2025-10-09 10:01:59.488 2 DEBUG nova.network.neutron [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  9 10:01:59 compute-0 nova_compute[187439]: 2025-10-09 10:01:59.488 2 DEBUG nova.objects.instance [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 10:01:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:01:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:59.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:59 compute-0 ovn_controller[83056]: 2025-10-09T10:01:59Z|00065|binding|INFO|Releasing lport a1fa08b5-bd0f-4c7f-b6cf-6b20fdb83e17 from this chassis (sb_readonly=0)
Oct  9 10:01:59 compute-0 NetworkManager[982]: <info>  [1760004119.6728] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Oct  9 10:01:59 compute-0 NetworkManager[982]: <info>  [1760004119.6733] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Oct  9 10:01:59 compute-0 nova_compute[187439]: 2025-10-09 10:01:59.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:59 compute-0 nova_compute[187439]: 2025-10-09 10:01:59.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:59 compute-0 ovn_controller[83056]: 2025-10-09T10:01:59Z|00066|binding|INFO|Releasing lport a1fa08b5-bd0f-4c7f-b6cf-6b20fdb83e17 from this chassis (sb_readonly=0)
Oct  9 10:01:59 compute-0 nova_compute[187439]: 2025-10-09 10:01:59.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:00.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:00 compute-0 nova_compute[187439]: 2025-10-09 10:02:00.438 2 DEBUG nova.network.neutron [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Updating instance_info_cache with network_info: [{"id": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "address": "fa:16:3e:2a:0b:83", "network": {"id": "0bf47658-57ad-4261-84b9-ab85a8c5f02f", "bridge": "br-int", "label": "tempest-network-smoke--418352199", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6adb2d1-94", "ovs_interfaceid": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
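The refreshed cache entry carries the full VIF view for the instance: port b6adb2d1-94ef-4149-bd66-a2c5929ce9bc on tunneled network 0bf47658-57ad-4261-84b9-ab85a8c5f02f, MTU 1442 (consistent with geneve encapsulation overhead on a 1500-byte underlay), and fixed IP 10.100.0.5 with floating IP 192.168.122.207 attached. Walking that structure for the addresses:

    def addresses(network_info):
        """Collect fixed and floating IPs from a nova network_info list."""
        out = []
        for vif in network_info:
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    out.append(ip['address'])
                    out.extend(f['address']
                               for f in ip.get('floating_ips', []))
        return out

    # -> ['10.100.0.5', '192.168.122.207'] for the entry above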
Oct  9 10:02:00 compute-0 nova_compute[187439]: 2025-10-09 10:02:00.452 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Releasing lock "refresh_cache-eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:02:00 compute-0 nova_compute[187439]: 2025-10-09 10:02:00.452 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  9 10:02:00 compute-0 nova_compute[187439]: 2025-10-09 10:02:00.452 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:02:00 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v868: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 376 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Oct  9 10:02:00 compute-0 nova_compute[187439]: 2025-10-09 10:02:00.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:00 compute-0 nova_compute[187439]: 2025-10-09 10:02:00.906 2 DEBUG nova.compute.manager [req-fee9e665-0a4b-4ca2-91c6-c8613c5230c8 req-803c9199-8319-4e3a-87a8-856fa4b19eae b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Received event network-changed-b6adb2d1-94ef-4149-bd66-a2c5929ce9bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:02:00 compute-0 nova_compute[187439]: 2025-10-09 10:02:00.906 2 DEBUG nova.compute.manager [req-fee9e665-0a4b-4ca2-91c6-c8613c5230c8 req-803c9199-8319-4e3a-87a8-856fa4b19eae b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Refreshing instance network info cache due to event network-changed-b6adb2d1-94ef-4149-bd66-a2c5929ce9bc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 10:02:00 compute-0 nova_compute[187439]: 2025-10-09 10:02:00.907 2 DEBUG oslo_concurrency.lockutils [req-fee9e665-0a4b-4ca2-91c6-c8613c5230c8 req-803c9199-8319-4e3a-87a8-856fa4b19eae b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:02:00 compute-0 nova_compute[187439]: 2025-10-09 10:02:00.907 2 DEBUG oslo_concurrency.lockutils [req-fee9e665-0a4b-4ca2-91c6-c8613c5230c8 req-803c9199-8319-4e3a-87a8-856fa4b19eae b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:02:00 compute-0 nova_compute[187439]: 2025-10-09 10:02:00.907 2 DEBUG nova.network.neutron [req-fee9e665-0a4b-4ca2-91c6-c8613c5230c8 req-803c9199-8319-4e3a-87a8-856fa4b19eae b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Refreshing network info cache for port b6adb2d1-94ef-4149-bd66-a2c5929ce9bc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 10:02:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:01 compute-0 nova_compute[187439]: 2025-10-09 10:02:01.444 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:02:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:01.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:01 compute-0 nova_compute[187439]: 2025-10-09 10:02:01.682 2 DEBUG nova.network.neutron [req-fee9e665-0a4b-4ca2-91c6-c8613c5230c8 req-803c9199-8319-4e3a-87a8-856fa4b19eae b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Updated VIF entry in instance network info cache for port b6adb2d1-94ef-4149-bd66-a2c5929ce9bc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 10:02:01 compute-0 nova_compute[187439]: 2025-10-09 10:02:01.683 2 DEBUG nova.network.neutron [req-fee9e665-0a4b-4ca2-91c6-c8613c5230c8 req-803c9199-8319-4e3a-87a8-856fa4b19eae b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Updating instance_info_cache with network_info: [{"id": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "address": "fa:16:3e:2a:0b:83", "network": {"id": "0bf47658-57ad-4261-84b9-ab85a8c5f02f", "bridge": "br-int", "label": "tempest-network-smoke--418352199", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6adb2d1-94", "ovs_interfaceid": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:02:01 compute-0 nova_compute[187439]: 2025-10-09 10:02:01.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:02.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:02:02] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Oct  9 10:02:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:02:02] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Oct  9 10:02:02 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v869: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct  9 10:02:02 compute-0 nova_compute[187439]: 2025-10-09 10:02:02.940 2 DEBUG oslo_concurrency.lockutils [req-fee9e665-0a4b-4ca2-91c6-c8613c5230c8 req-803c9199-8319-4e3a-87a8-856fa4b19eae b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:02:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:03 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:03.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:04.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:02:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:02:04 compute-0 podman[200611]: 2025-10-09 10:02:04.63066861 +0000 UTC m=+0.069328982 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  9 10:02:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v870: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct  9 10:02:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:05.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:05 compute-0 nova_compute[187439]: 2025-10-09 10:02:05.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:06.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v871: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct  9 10:02:06 compute-0 nova_compute[187439]: 2025-10-09 10:02:06.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:07.082Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:07.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:07.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:07.091Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:07.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:08 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:08.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:08 compute-0 ovn_controller[83056]: 2025-10-09T10:02:08Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2a:0b:83 10.100.0.5
Oct  9 10:02:08 compute-0 ovn_controller[83056]: 2025-10-09T10:02:08Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2a:0b:83 10.100.0.5
Oct  9 10:02:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v872: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct  9 10:02:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:08.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:08.924Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:08.924Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:08.924Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:09.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:10.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:10.116 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:10.117 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:10.117 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v873: 337 pgs: 337 active+clean; 89 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 148 KiB/s wr, 93 op/s
Oct  9 10:02:10 compute-0 nova_compute[187439]: 2025-10-09 10:02:10.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:11.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:11 compute-0 nova_compute[187439]: 2025-10-09 10:02:11.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:12.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:02:12] "GET /metrics HTTP/1.1" 200 48554 "" "Prometheus/2.51.0"
Oct  9 10:02:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:02:12] "GET /metrics HTTP/1.1" 200 48554 "" "Prometheus/2.51.0"
Oct  9 10:02:12 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v874: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Oct  9 10:02:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:13 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:13.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:14.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:14 compute-0 nova_compute[187439]: 2025-10-09 10:02:14.220 2 INFO nova.compute.manager [None req-4524eea2-08b2-420a-9ec7-c65957486f28 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Get console output#033[00m
Oct  9 10:02:14 compute-0 nova_compute[187439]: 2025-10-09 10:02:14.226 589 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  9 10:02:14 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v875: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  9 10:02:14 compute-0 podman[200663]: 2025-10-09 10:02:14.649089028 +0000 UTC m=+0.082712633 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  9 10:02:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:15.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:15 compute-0 nova_compute[187439]: 2025-10-09 10:02:15.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:16.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:16 compute-0 ovn_controller[83056]: 2025-10-09T10:02:16Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2a:0b:83 10.100.0.5
Oct  9 10:02:16 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v876: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  9 10:02:16 compute-0 nova_compute[187439]: 2025-10-09 10:02:16.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:17.083Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:17.094Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:17.094Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:17.095Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:17.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:17 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:17 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:17 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:18 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:18.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:18 compute-0 ovn_controller[83056]: 2025-10-09T10:02:18Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2a:0b:83 10.100.0.5
Oct  9 10:02:18 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v877: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  9 10:02:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:18.916Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:18.924Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:18.924Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:18.925Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.229 2 DEBUG nova.compute.manager [req-801a10f5-16ec-4159-bb30-228ec4f8a9f1 req-7bef881b-68a2-4104-ba6f-cd70938b518c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Received event network-changed-b6adb2d1-94ef-4149-bd66-a2c5929ce9bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.230 2 DEBUG nova.compute.manager [req-801a10f5-16ec-4159-bb30-228ec4f8a9f1 req-7bef881b-68a2-4104-ba6f-cd70938b518c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Refreshing instance network info cache due to event network-changed-b6adb2d1-94ef-4149-bd66-a2c5929ce9bc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.230 2 DEBUG oslo_concurrency.lockutils [req-801a10f5-16ec-4159-bb30-228ec4f8a9f1 req-7bef881b-68a2-4104-ba6f-cd70938b518c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.230 2 DEBUG oslo_concurrency.lockutils [req-801a10f5-16ec-4159-bb30-228ec4f8a9f1 req-7bef881b-68a2-4104-ba6f-cd70938b518c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.230 2 DEBUG nova.network.neutron [req-801a10f5-16ec-4159-bb30-228ec4f8a9f1 req-7bef881b-68a2-4104-ba6f-cd70938b518c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Refreshing network info cache for port b6adb2d1-94ef-4149-bd66-a2c5929ce9bc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.298 2 DEBUG oslo_concurrency.lockutils [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.299 2 DEBUG oslo_concurrency.lockutils [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.299 2 DEBUG oslo_concurrency.lockutils [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.299 2 DEBUG oslo_concurrency.lockutils [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.299 2 DEBUG oslo_concurrency.lockutils [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.300 2 INFO nova.compute.manager [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Terminating instance#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.301 2 DEBUG nova.compute.manager [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  9 10:02:19 compute-0 kernel: tapb6adb2d1-94 (unregistering): left promiscuous mode
Oct  9 10:02:19 compute-0 NetworkManager[982]: <info>  [1760004139.3419] device (tapb6adb2d1-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 10:02:19 compute-0 ovn_controller[83056]: 2025-10-09T10:02:19Z|00067|binding|INFO|Releasing lport b6adb2d1-94ef-4149-bd66-a2c5929ce9bc from this chassis (sb_readonly=0)
Oct  9 10:02:19 compute-0 ovn_controller[83056]: 2025-10-09T10:02:19Z|00068|binding|INFO|Setting lport b6adb2d1-94ef-4149-bd66-a2c5929ce9bc down in Southbound
Oct  9 10:02:19 compute-0 ovn_controller[83056]: 2025-10-09T10:02:19Z|00069|binding|INFO|Removing iface tapb6adb2d1-94 ovn-installed in OVS
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:19.361 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:0b:83 10.100.0.5'], port_security=['fa:16:3e:2a:0b:83 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bf47658-57ad-4261-84b9-ab85a8c5f02f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5cf6ff21-3de7-4af5-a258-b8c74aff1da4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75d55572-a5af-4670-bdea-a06a1b91e1e6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f406a6797f0>], logical_port=b6adb2d1-94ef-4149-bd66-a2c5929ce9bc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f406a6797f0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:02:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:19.362 92053 INFO neutron.agent.ovn.metadata.agent [-] Port b6adb2d1-94ef-4149-bd66-a2c5929ce9bc in datapath 0bf47658-57ad-4261-84b9-ab85a8c5f02f unbound from our chassis#033[00m
Oct  9 10:02:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:19.363 92053 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bf47658-57ad-4261-84b9-ab85a8c5f02f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  9 10:02:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:19.364 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[76bcabb5-8522-453b-882d-221607ffefdd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:19.366 92053 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f namespace which is not needed anymore#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:19 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct  9 10:02:19 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Consumed 12.136s CPU time.
Oct  9 10:02:19 compute-0 systemd-machined[143379]: Machine qemu-4-instance-0000000a terminated.
Oct  9 10:02:19 compute-0 neutron-haproxy-ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f[200533]: [NOTICE]   (200552) : haproxy version is 2.8.14-c23fe91
Oct  9 10:02:19 compute-0 neutron-haproxy-ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f[200533]: [NOTICE]   (200552) : path to executable is /usr/sbin/haproxy
Oct  9 10:02:19 compute-0 neutron-haproxy-ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f[200533]: [WARNING]  (200552) : Exiting Master process...
Oct  9 10:02:19 compute-0 neutron-haproxy-ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f[200533]: [WARNING]  (200552) : Exiting Master process...
Oct  9 10:02:19 compute-0 neutron-haproxy-ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f[200533]: [ALERT]    (200552) : Current worker (200564) exited with code 143 (Terminated)
Oct  9 10:02:19 compute-0 neutron-haproxy-ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f[200533]: [WARNING]  (200552) : All workers exited. Exiting... (0)
Oct  9 10:02:19 compute-0 systemd[1]: libpod-2d603d1dc7cb9d2f50903336f83e7cd0eb9c13720f0af21ce4856465419cf24d.scope: Deactivated successfully.
Oct  9 10:02:19 compute-0 podman[200711]: 2025-10-09 10:02:19.486864979 +0000 UTC m=+0.039964362 container died 2d603d1dc7cb9d2f50903336f83e7cd0eb9c13720f0af21ce4856465419cf24d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  9 10:02:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2d603d1dc7cb9d2f50903336f83e7cd0eb9c13720f0af21ce4856465419cf24d-userdata-shm.mount: Deactivated successfully.
Oct  9 10:02:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-23d578a72116b5ba3b407fe1d5df69bc32fea3524223b7bedb43229566e38e76-merged.mount: Deactivated successfully.
Oct  9 10:02:19 compute-0 podman[200711]: 2025-10-09 10:02:19.518957672 +0000 UTC m=+0.072057055 container cleanup 2d603d1dc7cb9d2f50903336f83e7cd0eb9c13720f0af21ce4856465419cf24d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.540 2 INFO nova.virt.libvirt.driver [-] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Instance destroyed successfully.#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.540 2 DEBUG nova.objects.instance [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'resources' on Instance uuid eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 10:02:19 compute-0 systemd[1]: libpod-conmon-2d603d1dc7cb9d2f50903336f83e7cd0eb9c13720f0af21ce4856465419cf24d.scope: Deactivated successfully.
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.561 2 DEBUG nova.virt.libvirt.vif [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T10:01:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-789438469',display_name='tempest-TestNetworkBasicOps-server-789438469',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-789438469',id=10,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHz2csLjROGW6puiUsq/DK2eUbSDHWua4HHP2HjVlMnr0HbBhHui8Uqq2Xb0MZ1EWl06DycuVyFM5Y0YKQRB2ZA67Qs7u+8VWufSHktRa+mDSGpGFK4bz0s+bo0+BkmtQw==',key_name='tempest-TestNetworkBasicOps-1863507351',keypairs=<?>,launch_index=0,launched_at=2025-10-09T10:01:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-a0it4ou7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T10:01:57Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "address": "fa:16:3e:2a:0b:83", "network": {"id": "0bf47658-57ad-4261-84b9-ab85a8c5f02f", "bridge": "br-int", "label": "tempest-network-smoke--418352199", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6adb2d1-94", "ovs_interfaceid": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.562 2 DEBUG nova.network.os_vif_util [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "address": "fa:16:3e:2a:0b:83", "network": {"id": "0bf47658-57ad-4261-84b9-ab85a8c5f02f", "bridge": "br-int", "label": "tempest-network-smoke--418352199", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6adb2d1-94", "ovs_interfaceid": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.563 2 DEBUG nova.network.os_vif_util [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2a:0b:83,bridge_name='br-int',has_traffic_filtering=True,id=b6adb2d1-94ef-4149-bd66-a2c5929ce9bc,network=Network(0bf47658-57ad-4261-84b9-ab85a8c5f02f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6adb2d1-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.563 2 DEBUG os_vif [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2a:0b:83,bridge_name='br-int',has_traffic_filtering=True,id=b6adb2d1-94ef-4149-bd66-a2c5929ce9bc,network=Network(0bf47658-57ad-4261-84b9-ab85a8c5f02f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6adb2d1-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.565 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6adb2d1-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:02:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.573 2 INFO os_vif [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2a:0b:83,bridge_name='br-int',has_traffic_filtering=True,id=b6adb2d1-94ef-4149-bd66-a2c5929ce9bc,network=Network(0bf47658-57ad-4261-84b9-ab85a8c5f02f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6adb2d1-94')#033[00m
Oct  9 10:02:19 compute-0 podman[200740]: 2025-10-09 10:02:19.588797756 +0000 UTC m=+0.041726395 container remove 2d603d1dc7cb9d2f50903336f83e7cd0eb9c13720f0af21ce4856465419cf24d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:02:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:19.594 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[8b94ff9e-fc52-4dcc-8f0d-2a49db12e20b]: (4, ('Thu Oct  9 10:02:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f (2d603d1dc7cb9d2f50903336f83e7cd0eb9c13720f0af21ce4856465419cf24d)\n2d603d1dc7cb9d2f50903336f83e7cd0eb9c13720f0af21ce4856465419cf24d\nThu Oct  9 10:02:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f (2d603d1dc7cb9d2f50903336f83e7cd0eb9c13720f0af21ce4856465419cf24d)\n2d603d1dc7cb9d2f50903336f83e7cd0eb9c13720f0af21ce4856465419cf24d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:19.596 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[842e1f7c-4220-4c16-a9a5-ffde5de61e0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:19.597 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0bf47658-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:19 compute-0 kernel: tap0bf47658-50: left promiscuous mode
Oct  9 10:02:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:02:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:19.621 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[80604bf2-6b84-4121-8456-681900797bb1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:19.636 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[24849dce-b1ef-484a-8fb9-40ddf2856547]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:19.637 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[747763e1-ef93-412a-a3bc-9b54d223154b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:19.653 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[66df401a-1268-430a-8192-a4f59f657815]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 180112, 'reachable_time': 23557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 200777, 'error': None, 'target': 'ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:02:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:02:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:19.656 92357 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0bf47658-57ad-4261-84b9-ab85a8c5f02f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  9 10:02:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:19.656 92357 DEBUG oslo.privsep.daemon [-] privsep: reply[51b0ebc4-3925-405d-9e9d-f556490b76d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d0bf47658\x2d57ad\x2d4261\x2d84b9\x2dab85a8c5f02f.mount: Deactivated successfully.
Oct  9 10:02:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:19.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:02:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.728 2 DEBUG nova.compute.manager [req-06b090a3-eb6a-46b1-9042-d93cadb780a6 req-dc3d013c-1f09-444c-b8bc-03222ce3324c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Received event network-vif-unplugged-b6adb2d1-94ef-4149-bd66-a2c5929ce9bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.729 2 DEBUG oslo_concurrency.lockutils [req-06b090a3-eb6a-46b1-9042-d93cadb780a6 req-dc3d013c-1f09-444c-b8bc-03222ce3324c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.729 2 DEBUG oslo_concurrency.lockutils [req-06b090a3-eb6a-46b1-9042-d93cadb780a6 req-dc3d013c-1f09-444c-b8bc-03222ce3324c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.729 2 DEBUG oslo_concurrency.lockutils [req-06b090a3-eb6a-46b1-9042-d93cadb780a6 req-dc3d013c-1f09-444c-b8bc-03222ce3324c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.730 2 DEBUG nova.compute.manager [req-06b090a3-eb6a-46b1-9042-d93cadb780a6 req-dc3d013c-1f09-444c-b8bc-03222ce3324c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] No waiting events found dispatching network-vif-unplugged-b6adb2d1-94ef-4149-bd66-a2c5929ce9bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.730 2 DEBUG nova.compute.manager [req-06b090a3-eb6a-46b1-9042-d93cadb780a6 req-dc3d013c-1f09-444c-b8bc-03222ce3324c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Received event network-vif-unplugged-b6adb2d1-94ef-4149-bd66-a2c5929ce9bc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.764 2 INFO nova.virt.libvirt.driver [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Deleting instance files /var/lib/nova/instances/eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2_del#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.765 2 INFO nova.virt.libvirt.driver [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Deletion of /var/lib/nova/instances/eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2_del complete#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.816 2 INFO nova.compute.manager [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Took 0.51 seconds to destroy the instance on the hypervisor.#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.816 2 DEBUG oslo.service.loopingcall [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.817 2 DEBUG nova.compute.manager [-] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  9 10:02:19 compute-0 nova_compute[187439]: 2025-10-09 10:02:19.817 2 DEBUG nova.network.neutron [-] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  9 10:02:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:20.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:20 compute-0 nova_compute[187439]: 2025-10-09 10:02:20.430 2 DEBUG nova.network.neutron [req-801a10f5-16ec-4159-bb30-228ec4f8a9f1 req-7bef881b-68a2-4104-ba6f-cd70938b518c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Updated VIF entry in instance network info cache for port b6adb2d1-94ef-4149-bd66-a2c5929ce9bc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 10:02:20 compute-0 nova_compute[187439]: 2025-10-09 10:02:20.431 2 DEBUG nova.network.neutron [req-801a10f5-16ec-4159-bb30-228ec4f8a9f1 req-7bef881b-68a2-4104-ba6f-cd70938b518c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Updating instance_info_cache with network_info: [{"id": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "address": "fa:16:3e:2a:0b:83", "network": {"id": "0bf47658-57ad-4261-84b9-ab85a8c5f02f", "bridge": "br-int", "label": "tempest-network-smoke--418352199", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6adb2d1-94", "ovs_interfaceid": "b6adb2d1-94ef-4149-bd66-a2c5929ce9bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:02:20 compute-0 nova_compute[187439]: 2025-10-09 10:02:20.449 2 DEBUG oslo_concurrency.lockutils [req-801a10f5-16ec-4159-bb30-228ec4f8a9f1 req-7bef881b-68a2-4104-ba6f-cd70938b518c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:02:20 compute-0 nova_compute[187439]: 2025-10-09 10:02:20.529 2 DEBUG nova.network.neutron [-] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:02:20 compute-0 nova_compute[187439]: 2025-10-09 10:02:20.539 2 INFO nova.compute.manager [-] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Took 0.72 seconds to deallocate network for instance.#033[00m
Oct  9 10:02:20 compute-0 nova_compute[187439]: 2025-10-09 10:02:20.568 2 DEBUG oslo_concurrency.lockutils [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:20 compute-0 nova_compute[187439]: 2025-10-09 10:02:20.569 2 DEBUG oslo_concurrency.lockutils [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:20 compute-0 nova_compute[187439]: 2025-10-09 10:02:20.607 2 DEBUG oslo_concurrency.processutils [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:02:20 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v878: 337 pgs: 337 active+clean; 114 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Oct  9 10:02:20 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:02:20 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1295465010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:02:20 compute-0 nova_compute[187439]: 2025-10-09 10:02:20.971 2 DEBUG oslo_concurrency.processutils [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:02:20 compute-0 nova_compute[187439]: 2025-10-09 10:02:20.977 2 DEBUG nova.compute.provider_tree [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:02:20 compute-0 nova_compute[187439]: 2025-10-09 10:02:20.989 2 DEBUG nova.scheduler.client.report [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:02:21 compute-0 nova_compute[187439]: 2025-10-09 10:02:21.003 2 DEBUG oslo_concurrency.lockutils [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:21 compute-0 nova_compute[187439]: 2025-10-09 10:02:21.020 2 INFO nova.scheduler.client.report [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Deleted allocations for instance eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2#033[00m
Oct  9 10:02:21 compute-0 nova_compute[187439]: 2025-10-09 10:02:21.061 2 DEBUG oslo_concurrency.lockutils [None req-76980b2e-82ee-4ed7-ba56-4a7d48c7d0cf 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:21 compute-0 nova_compute[187439]: 2025-10-09 10:02:21.306 2 DEBUG nova.compute.manager [req-6814b131-c233-4673-975f-2e6b81f718d7 req-461b9716-b56b-4fdd-9fa2-df1cb84641e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Received event network-vif-deleted-b6adb2d1-94ef-4149-bd66-a2c5929ce9bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:02:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:21.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:21 compute-0 nova_compute[187439]: 2025-10-09 10:02:21.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:21 compute-0 nova_compute[187439]: 2025-10-09 10:02:21.808 2 DEBUG nova.compute.manager [req-5216c97e-7d76-4bd1-ad17-9d61c0dc2a02 req-7372dbed-fa94-49e6-9762-dde4d71fa154 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Received event network-vif-plugged-b6adb2d1-94ef-4149-bd66-a2c5929ce9bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:02:21 compute-0 nova_compute[187439]: 2025-10-09 10:02:21.808 2 DEBUG oslo_concurrency.lockutils [req-5216c97e-7d76-4bd1-ad17-9d61c0dc2a02 req-7372dbed-fa94-49e6-9762-dde4d71fa154 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:21 compute-0 nova_compute[187439]: 2025-10-09 10:02:21.808 2 DEBUG oslo_concurrency.lockutils [req-5216c97e-7d76-4bd1-ad17-9d61c0dc2a02 req-7372dbed-fa94-49e6-9762-dde4d71fa154 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:21 compute-0 nova_compute[187439]: 2025-10-09 10:02:21.808 2 DEBUG oslo_concurrency.lockutils [req-5216c97e-7d76-4bd1-ad17-9d61c0dc2a02 req-7372dbed-fa94-49e6-9762-dde4d71fa154 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:21 compute-0 nova_compute[187439]: 2025-10-09 10:02:21.809 2 DEBUG nova.compute.manager [req-5216c97e-7d76-4bd1-ad17-9d61c0dc2a02 req-7372dbed-fa94-49e6-9762-dde4d71fa154 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] No waiting events found dispatching network-vif-plugged-b6adb2d1-94ef-4149-bd66-a2c5929ce9bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:02:21 compute-0 nova_compute[187439]: 2025-10-09 10:02:21.809 2 WARNING nova.compute.manager [req-5216c97e-7d76-4bd1-ad17-9d61c0dc2a02 req-7372dbed-fa94-49e6-9762-dde4d71fa154 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Received unexpected event network-vif-plugged-b6adb2d1-94ef-4149-bd66-a2c5929ce9bc for instance with vm_state deleted and task_state None.#033[00m
Oct  9 10:02:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:22.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:02:22] "GET /metrics HTTP/1.1" 200 48550 "" "Prometheus/2.51.0"
Oct  9 10:02:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:02:22] "GET /metrics HTTP/1.1" 200 48550 "" "Prometheus/2.51.0"
Oct  9 10:02:22 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v879: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 222 KiB/s rd, 2.0 MiB/s wr, 74 op/s
Oct  9 10:02:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:23 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:23.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:24.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:24 compute-0 nova_compute[187439]: 2025-10-09 10:02:24.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:24 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v880: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 19 KiB/s wr, 29 op/s
Oct  9 10:02:25 compute-0 nova_compute[187439]: 2025-10-09 10:02:25.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:25 compute-0 nova_compute[187439]: 2025-10-09 10:02:25.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:25 compute-0 podman[200809]: 2025-10-09 10:02:25.601739768 +0000 UTC m=+0.043417227 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct  9 10:02:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:25.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:26.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:26 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v881: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 19 KiB/s wr, 30 op/s
Oct  9 10:02:26 compute-0 nova_compute[187439]: 2025-10-09 10:02:26.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:27.084Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:27.094Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:27.094Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:27.095Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:27.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:28 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:28.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:28 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v882: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Oct  9 10:02:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:28.916Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:28.923Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:28.923Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:28.924Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:29 compute-0 nova_compute[187439]: 2025-10-09 10:02:29.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:29 compute-0 podman[200830]: 2025-10-09 10:02:29.605729335 +0000 UTC m=+0.044009292 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  9 10:02:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:29.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:02:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:30.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:02:30 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v883: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Oct  9 10:02:31 compute-0 podman[200954]: 2025-10-09 10:02:31.211071603 +0000 UTC m=+0.048016181 container exec fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:02:31 compute-0 podman[200954]: 2025-10-09 10:02:31.305397008 +0000 UTC m=+0.142341596 container exec_died fb4b20d7f49fce1655b597253331cde3f0bd1a6f65055c0c9e7e61613f5652d6 (image=quay.io/ceph/ceph:v19, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  9 10:02:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:31 compute-0 podman[201050]: 2025-10-09 10:02:31.673262519 +0000 UTC m=+0.045418557 container exec 10161c66b361b66edfdbf4951997fb2366322c945e67f044787f85dddc54c994 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 10:02:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:31.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:31 compute-0 podman[201072]: 2025-10-09 10:02:31.737248821 +0000 UTC m=+0.048051909 container exec_died 10161c66b361b66edfdbf4951997fb2366322c945e67f044787f85dddc54c994 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 10:02:31 compute-0 podman[201050]: 2025-10-09 10:02:31.739980367 +0000 UTC m=+0.112136395 container exec_died 10161c66b361b66edfdbf4951997fb2366322c945e67f044787f85dddc54c994 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 10:02:31 compute-0 nova_compute[187439]: 2025-10-09 10:02:31.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:32 compute-0 podman[201135]: 2025-10-09 10:02:32.049517447 +0000 UTC m=+0.045047988 container exec 5c740331e43a547cef58f363bed860d932ba62ab932b4c8a13e2e8dac6839868 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 10:02:32 compute-0 podman[201135]: 2025-10-09 10:02:32.078392934 +0000 UTC m=+0.073923475 container exec_died 5c740331e43a547cef58f363bed860d932ba62ab932b4c8a13e2e8dac6839868 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 10:02:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:32.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:32 compute-0 podman[201194]: 2025-10-09 10:02:32.258267208 +0000 UTC m=+0.038416955 container exec d505ba96f4f8073a145fdc67466363156d038071ebcd8a8aeed53305dbe3584a (image=quay.io/ceph/grafana:10.4.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 10:02:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:02:32] "GET /metrics HTTP/1.1" 200 48550 "" "Prometheus/2.51.0"
Oct  9 10:02:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:02:32] "GET /metrics HTTP/1.1" 200 48550 "" "Prometheus/2.51.0"
Oct  9 10:02:32 compute-0 podman[201194]: 2025-10-09 10:02:32.393694265 +0000 UTC m=+0.173844013 container exec_died d505ba96f4f8073a145fdc67466363156d038071ebcd8a8aeed53305dbe3584a (image=quay.io/ceph/grafana:10.4.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Oct  9 10:02:32 compute-0 podman[201252]: 2025-10-09 10:02:32.590565434 +0000 UTC m=+0.041717123 container exec 0c3906f36b8c5387e26601a1089154bdda03c8f87fbea5119420184790883682 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-0-kmcywb)
Oct  9 10:02:32 compute-0 podman[201252]: 2025-10-09 10:02:32.599349454 +0000 UTC m=+0.050501133 container exec_died 0c3906f36b8c5387e26601a1089154bdda03c8f87fbea5119420184790883682 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-0-kmcywb)
Oct  9 10:02:32 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v884: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 6.2 KiB/s wr, 10 op/s
Oct  9 10:02:32 compute-0 podman[201305]: 2025-10-09 10:02:32.790434812 +0000 UTC m=+0.042174235 container exec 45254cf9a2cd91037496049d12c8fdc604c0d669b06c7d761c3228749e14c043 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.expose-services=, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived)
Oct  9 10:02:32 compute-0 podman[201305]: 2025-10-09 10:02:32.830370658 +0000 UTC m=+0.082110071 container exec_died 45254cf9a2cd91037496049d12c8fdc604c0d669b06c7d761c3228749e14c043 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-0-uozjha, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, com.redhat.component=keepalived-container, distribution-scope=public, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9)
Oct  9 10:02:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:32 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:32 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:32 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
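
The three ganesha.nfsd events above trace one NFS-Ganesha grace cycle: the server enters a 90-second grace window, reloads reclaimable-client info from the RADOS backend, then checks whether grace can be lifted early (reclaim is complete and the client-ID count is zero, so there is nothing to wait for). A minimal sketch for extracting those fields from a captured journal, assuming the exact message layout shown above:

    import re
    import sys

    # Matches the ganesha grace events as they appear in this journal.
    GRACE_START = re.compile(r"NFS Server Now IN GRACE, duration (\d+)")
    GRACE_CHECK = re.compile(r"check grace:reclaim complete\((\d+)\) clid count\((\d+)\)")

    def summarize_grace(lines):
        """Yield (event, details) tuples for grace-related journal lines."""
        for line in lines:
            m = GRACE_START.search(line)
            if m:
                yield ("grace_start", {"duration_s": int(m.group(1))})
                continue
            m = GRACE_CHECK.search(line)
            if m:
                yield ("grace_check", {"reclaim_complete": int(m.group(1)),
                                       "clid_count": int(m.group(2))})

    if __name__ == "__main__":
        for event, details in summarize_grace(sys.stdin):
            print(event, details)

Fed this journal on stdin, it prints one tuple per grace event (here: a 90-second grace start, then a check reporting zero reclaimable clients).
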
Oct  9 10:02:33 compute-0 podman[201383]: 2025-10-09 10:02:33.008311378 +0000 UTC m=+0.036030598 container exec ad7aeb5739d77e7c0db5bedadf9f04170fb86eb3e4620e2c374ce0ab10bde8f2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 10:02:33 compute-0 podman[201383]: 2025-10-09 10:02:33.035510094 +0000 UTC m=+0.063229315 container exec_died ad7aeb5739d77e7c0db5bedadf9f04170fb86eb3e4620e2c374ce0ab10bde8f2 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 10:02:33 compute-0 podman[201429]: 2025-10-09 10:02:33.179252821 +0000 UTC m=+0.041985308 container exec 217ee5710fb39dcff3e6e8fa0b8ba75104b7bad4fc42becb1070f2d0166a1a7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  9 10:02:33 compute-0 podman[201429]: 2025-10-09 10:02:33.193475238 +0000 UTC m=+0.056207736 container exec_died 217ee5710fb39dcff3e6e8fa0b8ba75104b7bad4fc42becb1070f2d0166a1a7f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:02:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:02:33 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:02:33 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:33.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
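
The recurring anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102 are load-balancer health probes against the RGW beast frontend (the haproxy-rgw container seen earlier in this log is the likely prober). The same probe can be issued by hand; the host and port below are assumptions for illustration, since the access line records only the peer address, not the local listening port:

    import http.client

    # Reproduce the anonymous "HEAD /" probe seen in the beast access log.
    RGW_HOST = "192.168.122.100"   # hypothetical endpoint, not taken from the log
    RGW_PORT = 8080                # cephadm's default RGW port, assumed here

    conn = http.client.HTTPConnection(RGW_HOST, RGW_PORT, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status, resp.reason)   # expect: 200 OK while RGW is healthy
    conn.close()
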
Oct  9 10:02:33 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:33 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:02:33 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:02:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 10:02:33 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:02:33 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v885: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 10:02:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 10:02:33 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 10:02:33 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 10:02:33 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 10:02:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 10:02:33 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:02:33 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:02:33 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
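
This burst of mon_command/audit pairs is the cephadm mgr module refreshing its state: config-key writes for the per-host device cache, the OSD removal queue, and the nfs.cephfs service spec, plus "config generate-minimal-conf" and "auth get" to assemble the client configuration it distributes to managed hosts. The read-only half can be reproduced from any admin node; a sketch using only subcommands that appear in the audit entries above (an admin keyring is assumed):

    import json
    import subprocess

    def ceph(*args):
        """Run a ceph CLI command and return its stdout."""
        return subprocess.run(("ceph",) + args, check=True,
                              capture_output=True, text=True).stdout

    # The same reads the cephadm mgr dispatches in the audit log above.
    minimal_conf = ceph("config", "generate-minimal-conf")
    admin_key = ceph("auth", "get", "client.admin")
    # The mgr filters for states=["destroyed"]; the unfiltered tree is shown here.
    osd_tree = json.loads(ceph("osd", "tree", "--format", "json"))

    print(minimal_conf)
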
Oct  9 10:02:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:34.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:34 compute-0 podman[201646]: 2025-10-09 10:02:34.459605593 +0000 UTC m=+0.039611536 container create b593b75dad42ef7e932bc317c93d1fbdb27186ac28d31ecc28466cab62b5864e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  9 10:02:34 compute-0 systemd[1]: Started libpod-conmon-b593b75dad42ef7e932bc317c93d1fbdb27186ac28d31ecc28466cab62b5864e.scope.
Oct  9 10:02:34 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:02:34 compute-0 podman[201646]: 2025-10-09 10:02:34.525819712 +0000 UTC m=+0.105825675 container init b593b75dad42ef7e932bc317c93d1fbdb27186ac28d31ecc28466cab62b5864e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_solomon, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct  9 10:02:34 compute-0 podman[201646]: 2025-10-09 10:02:34.531519892 +0000 UTC m=+0.111525835 container start b593b75dad42ef7e932bc317c93d1fbdb27186ac28d31ecc28466cab62b5864e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_solomon, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  9 10:02:34 compute-0 nova_compute[187439]: 2025-10-09 10:02:34.533 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760004139.5317576, eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:02:34 compute-0 nova_compute[187439]: 2025-10-09 10:02:34.534 2 INFO nova.compute.manager [-] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] VM Stopped (Lifecycle Event)#033[00m
Oct  9 10:02:34 compute-0 podman[201646]: 2025-10-09 10:02:34.534607239 +0000 UTC m=+0.114613182 container attach b593b75dad42ef7e932bc317c93d1fbdb27186ac28d31ecc28466cab62b5864e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct  9 10:02:34 compute-0 fervent_solomon[201659]: 167 167
Oct  9 10:02:34 compute-0 systemd[1]: libpod-b593b75dad42ef7e932bc317c93d1fbdb27186ac28d31ecc28466cab62b5864e.scope: Deactivated successfully.
Oct  9 10:02:34 compute-0 podman[201646]: 2025-10-09 10:02:34.536352858 +0000 UTC m=+0.116358881 container died b593b75dad42ef7e932bc317c93d1fbdb27186ac28d31ecc28466cab62b5864e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 10:02:34 compute-0 podman[201646]: 2025-10-09 10:02:34.445705663 +0000 UTC m=+0.025711626 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:02:34 compute-0 nova_compute[187439]: 2025-10-09 10:02:34.548 2 DEBUG nova.compute.manager [None req-9b5e85ec-a133-4ed2-80f7-ea202023d821 - - - - - -] [instance: eb8f5051-f5ff-456c-a130-f0d1d3d5c7e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:02:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-0967185cbb36a016491f73a7a306c0bb5b178ec4d51334302f6d6ca09381f993-merged.mount: Deactivated successfully.
Oct  9 10:02:34 compute-0 podman[201646]: 2025-10-09 10:02:34.555161066 +0000 UTC m=+0.135167008 container remove b593b75dad42ef7e932bc317c93d1fbdb27186ac28d31ecc28466cab62b5864e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_solomon, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:02:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:02:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:02:34 compute-0 nova_compute[187439]: 2025-10-09 10:02:34.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:34 compute-0 systemd[1]: libpod-conmon-b593b75dad42ef7e932bc317c93d1fbdb27186ac28d31ecc28466cab62b5864e.scope: Deactivated successfully.
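
The create/init/start/attach/died/remove sequence above (with the image pull event stamped at the start of the run, hence appearing out of order) is one short-lived cephadm helper container. Its only output, "167 167", is most likely a probe of the ceph uid/gid, since 167 is the ceph user and group on RHEL-family images; the helper's command line itself is not logged here. Lifecycle events like these can be followed live; the JSON field names below follow podman's event schema and should be treated as assumptions:

    import json
    import subprocess

    # Follow container lifecycle events like the create/start/died/remove
    # sequence above. `podman events --format json` emits one JSON object
    # per line; filtering on the image narrows it to cephadm helpers.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True)

    for line in proc.stdout:
        ev = json.loads(line)
        if "quay.io/ceph/ceph" in ev.get("Image", ""):
            print(ev.get("Time"), ev.get("Status"), ev.get("Name"))
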
Oct  9 10:02:34 compute-0 podman[201683]: 2025-10-09 10:02:34.694291941 +0000 UTC m=+0.037362659 container create 6961ee3e3b0bde94b6411c45eced4569a5cc889218d74d2357e06db8909794cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_banzai, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:02:34 compute-0 systemd[1]: Started libpod-conmon-6961ee3e3b0bde94b6411c45eced4569a5cc889218d74d2357e06db8909794cb.scope.
Oct  9 10:02:34 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a7be6c8f2925073a372d8b894cd41cf6a1584bde158442452ec2612ec8a972/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a7be6c8f2925073a372d8b894cd41cf6a1584bde158442452ec2612ec8a972/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a7be6c8f2925073a372d8b894cd41cf6a1584bde158442452ec2612ec8a972/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a7be6c8f2925073a372d8b894cd41cf6a1584bde158442452ec2612ec8a972/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a7be6c8f2925073a372d8b894cd41cf6a1584bde158442452ec2612ec8a972/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:02:34 compute-0 podman[201683]: 2025-10-09 10:02:34.763612523 +0000 UTC m=+0.106683250 container init 6961ee3e3b0bde94b6411c45eced4569a5cc889218d74d2357e06db8909794cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_banzai, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  9 10:02:34 compute-0 podman[201683]: 2025-10-09 10:02:34.772314269 +0000 UTC m=+0.115384976 container start 6961ee3e3b0bde94b6411c45eced4569a5cc889218d74d2357e06db8909794cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_banzai, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:02:34 compute-0 podman[201683]: 2025-10-09 10:02:34.677924192 +0000 UTC m=+0.020994919 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:02:34 compute-0 podman[201683]: 2025-10-09 10:02:34.773638995 +0000 UTC m=+0.116709702 container attach 6961ee3e3b0bde94b6411c45eced4569a5cc889218d74d2357e06db8909794cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 10:02:34 compute-0 podman[201694]: 2025-10-09 10:02:34.782909012 +0000 UTC m=+0.057533985 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  9 10:02:34 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:02:34 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:34 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:34 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:02:35 compute-0 cool_banzai[201702]: --> passed data devices: 0 physical, 1 LVM
Oct  9 10:02:35 compute-0 cool_banzai[201702]: --> All data devices are unavailable
Oct  9 10:02:35 compute-0 systemd[1]: libpod-6961ee3e3b0bde94b6411c45eced4569a5cc889218d74d2357e06db8909794cb.scope: Deactivated successfully.
Oct  9 10:02:35 compute-0 podman[201683]: 2025-10-09 10:02:35.061118696 +0000 UTC m=+0.404189402 container died 6961ee3e3b0bde94b6411c45eced4569a5cc889218d74d2357e06db8909794cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_banzai, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:02:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-36a7be6c8f2925073a372d8b894cd41cf6a1584bde158442452ec2612ec8a972-merged.mount: Deactivated successfully.
Oct  9 10:02:35 compute-0 podman[201683]: 2025-10-09 10:02:35.089345941 +0000 UTC m=+0.432416648 container remove 6961ee3e3b0bde94b6411c45eced4569a5cc889218d74d2357e06db8909794cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:02:35 compute-0 systemd[1]: libpod-conmon-6961ee3e3b0bde94b6411c45eced4569a5cc889218d74d2357e06db8909794cb.scope: Deactivated successfully.
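
cool_banzai is a ceph-volume batch dry run: the drive group matched one LVM data device, but it is already consumed by an existing OSD, so there is nothing new to deploy ("All data devices are unavailable"). The host's disk view can be re-checked the same way; a sketch assuming cephadm's ceph-volume wrapper and the inventory JSON fields path, available, and rejected_reasons:

    import json
    import subprocess

    # Run ceph-volume's inventory inside a cephadm helper container, as the
    # log above does; arguments after "--" are passed to ceph-volume itself.
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True).stdout

    for dev in json.loads(out):
        print(dev.get("path"), "available:", dev.get("available"),
              "rejected:", dev.get("rejected_reasons"))

A device already carrying an OSD LV, like /dev/loop3 here, shows up as unavailable with a rejection reason such as the presence of LVM metadata.
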
Oct  9 10:02:35 compute-0 podman[201818]: 2025-10-09 10:02:35.542592675 +0000 UTC m=+0.031479423 container create 936ce9c3142f9ac48593f0e720e31ff669fe3a71aa120e5a4d7bb189a688151d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mendeleev, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:02:35 compute-0 systemd[1]: Started libpod-conmon-936ce9c3142f9ac48593f0e720e31ff669fe3a71aa120e5a4d7bb189a688151d.scope.
Oct  9 10:02:35 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:02:35 compute-0 podman[201818]: 2025-10-09 10:02:35.59798333 +0000 UTC m=+0.086870080 container init 936ce9c3142f9ac48593f0e720e31ff669fe3a71aa120e5a4d7bb189a688151d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct  9 10:02:35 compute-0 podman[201818]: 2025-10-09 10:02:35.603033024 +0000 UTC m=+0.091919773 container start 936ce9c3142f9ac48593f0e720e31ff669fe3a71aa120e5a4d7bb189a688151d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:02:35 compute-0 podman[201818]: 2025-10-09 10:02:35.606230568 +0000 UTC m=+0.095117318 container attach 936ce9c3142f9ac48593f0e720e31ff669fe3a71aa120e5a4d7bb189a688151d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  9 10:02:35 compute-0 naughty_mendeleev[201830]: 167 167
Oct  9 10:02:35 compute-0 systemd[1]: libpod-936ce9c3142f9ac48593f0e720e31ff669fe3a71aa120e5a4d7bb189a688151d.scope: Deactivated successfully.
Oct  9 10:02:35 compute-0 podman[201818]: 2025-10-09 10:02:35.608013158 +0000 UTC m=+0.096899907 container died 936ce9c3142f9ac48593f0e720e31ff669fe3a71aa120e5a4d7bb189a688151d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Oct  9 10:02:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffef2cf9a74f03ca586db685b5f5bf3782dc97297e633857ca257ef45fe99983-merged.mount: Deactivated successfully.
Oct  9 10:02:35 compute-0 podman[201818]: 2025-10-09 10:02:35.529694743 +0000 UTC m=+0.018581492 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:02:35 compute-0 podman[201818]: 2025-10-09 10:02:35.631954087 +0000 UTC m=+0.120840835 container remove 936ce9c3142f9ac48593f0e720e31ff669fe3a71aa120e5a4d7bb189a688151d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  9 10:02:35 compute-0 systemd[1]: libpod-conmon-936ce9c3142f9ac48593f0e720e31ff669fe3a71aa120e5a4d7bb189a688151d.scope: Deactivated successfully.
Oct  9 10:02:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:02:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:35.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:02:35 compute-0 podman[201853]: 2025-10-09 10:02:35.762805473 +0000 UTC m=+0.037212315 container create 615a46c55176514c738f0685e3b8d5059bff1ad0db5904336f163703304d97f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_murdock, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:02:35 compute-0 systemd[1]: Started libpod-conmon-615a46c55176514c738f0685e3b8d5059bff1ad0db5904336f163703304d97f1.scope.
Oct  9 10:02:35 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:02:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73940158aa8964c752b8a4f0bc8e73b42df15f23144d8332224eecaa58c6eaf7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:02:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73940158aa8964c752b8a4f0bc8e73b42df15f23144d8332224eecaa58c6eaf7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:02:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73940158aa8964c752b8a4f0bc8e73b42df15f23144d8332224eecaa58c6eaf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:02:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73940158aa8964c752b8a4f0bc8e73b42df15f23144d8332224eecaa58c6eaf7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:02:35 compute-0 podman[201853]: 2025-10-09 10:02:35.821648173 +0000 UTC m=+0.096055015 container init 615a46c55176514c738f0685e3b8d5059bff1ad0db5904336f163703304d97f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:02:35 compute-0 podman[201853]: 2025-10-09 10:02:35.826571969 +0000 UTC m=+0.100978801 container start 615a46c55176514c738f0685e3b8d5059bff1ad0db5904336f163703304d97f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_murdock, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  9 10:02:35 compute-0 podman[201853]: 2025-10-09 10:02:35.827680478 +0000 UTC m=+0.102087310 container attach 615a46c55176514c738f0685e3b8d5059bff1ad0db5904336f163703304d97f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  9 10:02:35 compute-0 podman[201853]: 2025-10-09 10:02:35.74843721 +0000 UTC m=+0.022844052 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:02:35 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v886: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct  9 10:02:36 compute-0 priceless_murdock[201866]: {
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:    "1": [
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:        {
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:            "devices": [
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:                "/dev/loop3"
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:            ],
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:            "lv_name": "ceph_lv0",
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:            "lv_size": "21470642176",
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:            "name": "ceph_lv0",
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:            "tags": {
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:                "ceph.cephx_lockbox_secret": "",
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:                "ceph.cluster_name": "ceph",
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:                "ceph.crush_device_class": "",
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:                "ceph.encrypted": "0",
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:                "ceph.osd_id": "1",
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:                "ceph.type": "block",
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:                "ceph.vdo": "0",
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:                "ceph.with_tpm": "0"
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:            },
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:            "type": "block",
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:            "vg_name": "ceph_vg0"
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:        }
Oct  9 10:02:36 compute-0 priceless_murdock[201866]:    ]
Oct  9 10:02:36 compute-0 priceless_murdock[201866]: }
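
priceless_murdock's output is ceph-volume lvm list style JSON: a map of OSD id to the logical volumes backing it, here osd.1 on /dev/ceph_vg0/ceph_lv0, a loop-device-backed volume group. Because the journal prefixes every line, the JSON has to be reassembled before parsing; a minimal sketch assuming the prefix layout shown above:

    import json
    import re
    import sys

    # Journal lines look like:
    #   Oct  9 10:02:36 compute-0 priceless_murdock[201866]: <json fragment>
    PREFIX = re.compile(r"^.*?priceless_murdock\[\d+\]: ?")

    def reassemble(lines):
        """Strip the syslog prefix and rejoin the JSON fragments."""
        return "".join(PREFIX.sub("", ln) for ln in lines
                       if "priceless_murdock" in ln)

    osd_map = json.loads(reassemble(sys.stdin))
    for osd_id, lvs in osd_map.items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices']}")

Piped through this journal section, it prints: osd.1: /dev/ceph_vg0/ceph_lv0 on ['/dev/loop3'].
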
Oct  9 10:02:36 compute-0 systemd[1]: libpod-615a46c55176514c738f0685e3b8d5059bff1ad0db5904336f163703304d97f1.scope: Deactivated successfully.
Oct  9 10:02:36 compute-0 conmon[201866]: conmon 615a46c55176514c738f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-615a46c55176514c738f0685e3b8d5059bff1ad0db5904336f163703304d97f1.scope/container/memory.events
Oct  9 10:02:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:36.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:36 compute-0 podman[201876]: 2025-10-09 10:02:36.127561786 +0000 UTC m=+0.020499476 container died 615a46c55176514c738f0685e3b8d5059bff1ad0db5904336f163703304d97f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct  9 10:02:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-73940158aa8964c752b8a4f0bc8e73b42df15f23144d8332224eecaa58c6eaf7-merged.mount: Deactivated successfully.
Oct  9 10:02:36 compute-0 podman[201876]: 2025-10-09 10:02:36.151730323 +0000 UTC m=+0.044668013 container remove 615a46c55176514c738f0685e3b8d5059bff1ad0db5904336f163703304d97f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:02:36 compute-0 systemd[1]: libpod-conmon-615a46c55176514c738f0685e3b8d5059bff1ad0db5904336f163703304d97f1.scope: Deactivated successfully.
Oct  9 10:02:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:36 compute-0 podman[201970]: 2025-10-09 10:02:36.664455634 +0000 UTC m=+0.037105455 container create 25bd8357c27f057cd162af87b173144a22b40c66f9d079e8615b85c2962d8137 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:02:36 compute-0 systemd[1]: Started libpod-conmon-25bd8357c27f057cd162af87b173144a22b40c66f9d079e8615b85c2962d8137.scope.
Oct  9 10:02:36 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:02:36 compute-0 podman[201970]: 2025-10-09 10:02:36.720863446 +0000 UTC m=+0.093513267 container init 25bd8357c27f057cd162af87b173144a22b40c66f9d079e8615b85c2962d8137 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_khorana, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:02:36 compute-0 podman[201970]: 2025-10-09 10:02:36.727651507 +0000 UTC m=+0.100301317 container start 25bd8357c27f057cd162af87b173144a22b40c66f9d079e8615b85c2962d8137 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  9 10:02:36 compute-0 podman[201970]: 2025-10-09 10:02:36.72877805 +0000 UTC m=+0.101427869 container attach 25bd8357c27f057cd162af87b173144a22b40c66f9d079e8615b85c2962d8137 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  9 10:02:36 compute-0 optimistic_khorana[201983]: 167 167
Oct  9 10:02:36 compute-0 systemd[1]: libpod-25bd8357c27f057cd162af87b173144a22b40c66f9d079e8615b85c2962d8137.scope: Deactivated successfully.
Oct  9 10:02:36 compute-0 conmon[201983]: conmon 25bd8357c27f057cd162 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-25bd8357c27f057cd162af87b173144a22b40c66f9d079e8615b85c2962d8137.scope/container/memory.events
Oct  9 10:02:36 compute-0 podman[201970]: 2025-10-09 10:02:36.735174341 +0000 UTC m=+0.107824161 container died 25bd8357c27f057cd162af87b173144a22b40c66f9d079e8615b85c2962d8137 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  9 10:02:36 compute-0 podman[201970]: 2025-10-09 10:02:36.648724355 +0000 UTC m=+0.021374195 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:02:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dfc396511228034ed97075293cc342d6e2205a6174c8a63d5bcca9493552153-merged.mount: Deactivated successfully.
Oct  9 10:02:36 compute-0 podman[201970]: 2025-10-09 10:02:36.755562957 +0000 UTC m=+0.128212776 container remove 25bd8357c27f057cd162af87b173144a22b40c66f9d079e8615b85c2962d8137 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_khorana, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:02:36 compute-0 systemd[1]: libpod-conmon-25bd8357c27f057cd162af87b173144a22b40c66f9d079e8615b85c2962d8137.scope: Deactivated successfully.
Oct  9 10:02:36 compute-0 nova_compute[187439]: 2025-10-09 10:02:36.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:36 compute-0 podman[202005]: 2025-10-09 10:02:36.901425566 +0000 UTC m=+0.035922524 container create 5765df66d96135cb2e1d4660abe2d374808683e5890b816bedb67a4496f88c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_goldberg, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  9 10:02:36 compute-0 systemd[1]: Started libpod-conmon-5765df66d96135cb2e1d4660abe2d374808683e5890b816bedb67a4496f88c62.scope.
Oct  9 10:02:36 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:02:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1694472ed9edccf365825b8273c469810ef44f04a42bda0f5693872dc54979e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:02:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1694472ed9edccf365825b8273c469810ef44f04a42bda0f5693872dc54979e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:02:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1694472ed9edccf365825b8273c469810ef44f04a42bda0f5693872dc54979e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:02:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1694472ed9edccf365825b8273c469810ef44f04a42bda0f5693872dc54979e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:02:36 compute-0 podman[202005]: 2025-10-09 10:02:36.974985306 +0000 UTC m=+0.109482264 container init 5765df66d96135cb2e1d4660abe2d374808683e5890b816bedb67a4496f88c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  9 10:02:36 compute-0 podman[202005]: 2025-10-09 10:02:36.980621746 +0000 UTC m=+0.115118714 container start 5765df66d96135cb2e1d4660abe2d374808683e5890b816bedb67a4496f88c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_goldberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True)
Oct  9 10:02:36 compute-0 podman[202005]: 2025-10-09 10:02:36.981798323 +0000 UTC m=+0.116295291 container attach 5765df66d96135cb2e1d4660abe2d374808683e5890b816bedb67a4496f88c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_goldberg, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:02:36 compute-0 podman[202005]: 2025-10-09 10:02:36.888177334 +0000 UTC m=+0.022674323 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:02:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:37.085Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:37.095Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:37.095Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:37.095Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
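Alertmanager is repeatedly failing to resolve the three np000547830x.shiftstack dashboard hosts against the resolver at 192.168.122.80. A minimal sketch to reproduce the lookup from outside the container, assuming the dnspython package is available (hostnames and server address are taken from the log lines above):

    import dns.exception
    import dns.resolver  # pip install dnspython

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["192.168.122.80"]  # resolver seen in the errors above
    for host in ("np0005478302.shiftstack",
                 "np0005478303.shiftstack",
                 "np0005478304.shiftstack"):
        try:
            print(host, [r.address for r in resolver.resolve(host, "A")])
        except dns.exception.DNSException as exc:
            # NXDOMAIN here corresponds to the "no such host" failures above
            print(host, type(exc).__name__)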
Oct  9 10:02:37 compute-0 awesome_goldberg[202018]: {}
Oct  9 10:02:37 compute-0 lvm[202095]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:02:37 compute-0 lvm[202095]: VG ceph_vg0 finished
Oct  9 10:02:37 compute-0 systemd[1]: libpod-5765df66d96135cb2e1d4660abe2d374808683e5890b816bedb67a4496f88c62.scope: Deactivated successfully.
Oct  9 10:02:37 compute-0 lvm[202096]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:02:37 compute-0 lvm[202096]: VG ceph_vg0 finished
Oct  9 10:02:37 compute-0 podman[202097]: 2025-10-09 10:02:37.623926862 +0000 UTC m=+0.024393281 container died 5765df66d96135cb2e1d4660abe2d374808683e5890b816bedb67a4496f88c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_goldberg, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:02:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1694472ed9edccf365825b8273c469810ef44f04a42bda0f5693872dc54979e-merged.mount: Deactivated successfully.
Oct  9 10:02:37 compute-0 podman[202097]: 2025-10-09 10:02:37.651472964 +0000 UTC m=+0.051939383 container remove 5765df66d96135cb2e1d4660abe2d374808683e5890b816bedb67a4496f88c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_goldberg, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:02:37 compute-0 systemd[1]: libpod-conmon-5765df66d96135cb2e1d4660abe2d374808683e5890b816bedb67a4496f88c62.scope: Deactivated successfully.
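The container 5765df66... lives for roughly one second: init/start/attach at 10:02:36, a single {} on stdout, then died/remove at 10:02:37, the usual footprint of a cephadm helper invocation. Note the image pull record carries an earlier monotonic offset (m=+0.022) than start/attach; podman emits these events asynchronously, so ordering by the m= offset is more reliable than ordering by log position. One way to watch such short-lived containers is podman's event stream; a sketch, assuming podman is installed and emits JSON events (field names can vary across podman versions, so the whole event is printed):

    import json
    import subprocess

    # Follow the podman event stream and print container lifecycle events
    # (init/start/attach/died/remove, as seen above); Ctrl-C to stop.
    proc = subprocess.Popen(["podman", "events", "--format", "json"],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        event = json.loads(line)
        if event.get("Type") == "container":
            print(event)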
Oct  9 10:02:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:02:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:37.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
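The anonymous HEAD / probes from 192.168.122.100 and .102 recur about every two seconds and always return 200 with near-zero latency, the signature of load-balancer health checks rather than user traffic. A sketch of the same probe; the listen address and port are assumptions, since the beast lines above do not show which endpoint radosgw is bound to:

    import http.client

    # Hypothetical endpoint: adjust host/port to the local beast frontend.
    conn = http.client.HTTPConnection("127.0.0.1", 8080, timeout=5)
    conn.request("HEAD", "/")
    response = conn.getresponse()
    print(response.status, response.reason)  # expect 200, as in the beast lines
    conn.close()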
Oct  9 10:02:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:37 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:02:37 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:37 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v887: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 10:02:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:38 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:38 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:38 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:38.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:38 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:38 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:38.917Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:38.925Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:38.926Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:38.928Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:39 compute-0 nova_compute[187439]: 2025-10-09 10:02:39.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:02:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:02:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:39.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:02:39 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v888: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 10:02:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:40.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:02:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:41.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:02:41 compute-0 nova_compute[187439]: 2025-10-09 10:02:41.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:02:41 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v889: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.9 MiB/s wr, 33 op/s
Oct  9 10:02:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:42.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:02:42] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Oct  9 10:02:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:02:42] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Oct  9 10:02:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:43.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:43 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v890: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.9 MiB/s wr, 33 op/s
Oct  9 10:02:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:44.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:44 compute-0 nova_compute[187439]: 2025-10-09 10:02:44.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:02:45 compute-0 podman[202142]: 2025-10-09 10:02:45.640805702 +0000 UTC m=+0.078118008 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  9 10:02:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:02:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:45.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:02:45 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v891: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct  9 10:02:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:46.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:46 compute-0 nova_compute[187439]: 2025-10-09 10:02:46.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:02:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:47.087Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:47.094Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:47.094Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:47.095Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:47.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:47 compute-0 nova_compute[187439]: 2025-10-09 10:02:47.838 2 DEBUG oslo_concurrency.lockutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 10:02:47 compute-0 nova_compute[187439]: 2025-10-09 10:02:47.839 2 DEBUG oslo_concurrency.lockutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 10:02:47 compute-0 nova_compute[187439]: 2025-10-09 10:02:47.849 2 DEBUG nova.compute.manager [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  9 10:02:47 compute-0 nova_compute[187439]: 2025-10-09 10:02:47.908 2 DEBUG oslo_concurrency.lockutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 10:02:47 compute-0 nova_compute[187439]: 2025-10-09 10:02:47.908 2 DEBUG oslo_concurrency.lockutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 10:02:47 compute-0 nova_compute[187439]: 2025-10-09 10:02:47.913 2 DEBUG nova.virt.hardware [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  9 10:02:47 compute-0 nova_compute[187439]: 2025-10-09 10:02:47.913 2 INFO nova.compute.claims [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Claim successful on node compute-0.ctlplane.example.com
Oct  9 10:02:47 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v892: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct  9 10:02:47 compute-0 nova_compute[187439]: 2025-10-09 10:02:47.986 2 DEBUG oslo_concurrency.processutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 10:02:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:02:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:48.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.354 2 DEBUG oslo_concurrency.processutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.368s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
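Before claiming disk for the new instance, nova shells out to ceph df (the processutils lines above) to refresh pool capacity. A standalone sketch of the same call, assuming the openstack cephx user and config path shown in the log are usable from the shell:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"], text=True)
    df = json.loads(out)
    # Cluster-wide totals live under "stats"; per-pool figures under "pools".
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])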
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.359 2 DEBUG nova.compute.provider_tree [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.372 2 DEBUG nova.scheduler.client.report [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
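The inventory dict above is what placement uses to bound scheduling on this node; effective capacity per resource class is (total - reserved) * allocation_ratio. A quick worked check of the numbers from the log line:

    inventory = {
        "VCPU": {"total": 4, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        # placement capacity formula: (total - reserved) * allocation_ratio
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 16.0, MEMORY_MB 7168.0, DISK_GB 52.2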
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.385 2 DEBUG oslo_concurrency.lockutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.387 2 DEBUG nova.compute.manager [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.428 2 DEBUG nova.compute.manager [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.428 2 DEBUG nova.network.neutron [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.453 2 INFO nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.464 2 DEBUG nova.compute.manager [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.544 2 DEBUG nova.compute.manager [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.545 2 DEBUG nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.545 2 INFO nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Creating image(s)
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.563 2 DEBUG nova.storage.rbd_utils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image d7ef9240-faf8-4f56-b3ac-7a3e0830de38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.580 2 DEBUG nova.storage.rbd_utils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image d7ef9240-faf8-4f56-b3ac-7a3e0830de38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.598 2 DEBUG nova.storage.rbd_utils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image d7ef9240-faf8-4f56-b3ac-7a3e0830de38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.600 2 DEBUG oslo_concurrency.processutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.620 2 DEBUG nova.policy [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.653 2 DEBUG oslo_concurrency.processutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
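The qemu-img probe above is wrapped in oslo_concurrency.prlimit, capping the child at 1 GiB of address space and 30 CPU seconds so a malformed image cannot wedge the agent. A rough stdlib equivalent of that wrapper, assuming a Linux host (the limits apply only to the child process):

    import json
    import resource
    import subprocess

    def _limit():
        # Runs in the child between fork and exec, mirroring prlimit --as/--cpu.
        resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))
        resource.setrlimit(resource.RLIMIT_CPU, (30, 30))

    out = subprocess.check_output(
        ["qemu-img", "info",
         "/var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb",
         "--force-share", "--output=json"],
        preexec_fn=_limit, text=True)
    print(json.loads(out)["virtual-size"])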
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.654 2 DEBUG oslo_concurrency.lockutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.654 2 DEBUG oslo_concurrency.lockutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.655 2 DEBUG oslo_concurrency.lockutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.674 2 DEBUG nova.storage.rbd_utils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image d7ef9240-faf8-4f56-b3ac-7a3e0830de38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.677 2 DEBUG oslo_concurrency.processutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb d7ef9240-faf8-4f56-b3ac-7a3e0830de38_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.812 2 DEBUG oslo_concurrency.processutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb d7ef9240-faf8-4f56-b3ac-7a3e0830de38_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.863 2 DEBUG nova.storage.rbd_utils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] resizing rbd image d7ef9240-faf8-4f56-b3ac-7a3e0830de38_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
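With the RBD image backend, nova imports the cached base image straight into the vms pool and then grows it to the flavor's 1 GiB root disk (the resize to 1073741824 above). The same two steps by hand, assuming the openstack cephx user from the log; rbd resize interprets --size in MiB, so 1024 matches the byte count logged:

    import subprocess

    disk = "d7ef9240-faf8-4f56-b3ac-7a3e0830de38_disk"  # image name from the log
    base = "/var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb"
    ceph = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    subprocess.check_call(["rbd", "import", "--pool", "vms", base, disk,
                           "--image-format=2", *ceph])
    subprocess.check_call(["rbd", "resize", f"vms/{disk}", "--size", "1024", *ceph])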
Oct  9 10:02:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:48.918Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:48.926Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:48.926Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:48.926Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.926 2 DEBUG nova.objects.instance [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'migration_context' on Instance uuid d7ef9240-faf8-4f56-b3ac-7a3e0830de38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.938 2 DEBUG nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.938 2 DEBUG nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Ensure instance console log exists: /var/lib/nova/instances/d7ef9240-faf8-4f56-b3ac-7a3e0830de38/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.938 2 DEBUG oslo_concurrency.lockutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.939 2 DEBUG oslo_concurrency.lockutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 10:02:48 compute-0 nova_compute[187439]: 2025-10-09 10:02:48.939 2 DEBUG oslo_concurrency.lockutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
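The Acquiring/acquired/released triple is oslo.concurrency's standard DEBUG trace around a named lock; here _allocate_mdevs holds vgpu_resources for 0.000s because the flavor requests no mediated devices. In application code the pattern looks roughly like this, a sketch against the public lockutils API assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    # An in-process named lock; the library logs the waited/held DEBUG
    # lines seen above as the context manager is entered and exited.
    with lockutils.lock("vgpu_resources"):
        pass  # allocate mediated devices here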
Oct  9 10:02:49 compute-0 nova_compute[187439]: 2025-10-09 10:02:49.522 2 DEBUG nova.network.neutron [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Successfully created port: 22bc2188-3978-476c-a2b1-0107e1eeb4cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  9 10:02:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_10:02:49
Oct  9 10:02:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 10:02:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 10:02:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', '.nfs', 'images', '.mgr', 'vms', 'cephfs.cephfs.meta', 'volumes', '.rgw.root']
Oct  9 10:02:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 10:02:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:02:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:02:49 compute-0 nova_compute[187439]: 2025-10-09 10:02:49.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:02:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:02:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:02:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:02:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:02:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 10:02:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:02:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:02:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:02:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:02:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:49.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:02:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:02:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 10:02:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:02:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:02:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:02:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:02:49 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v893: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct  9 10:02:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:02:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:50.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:02:50 compute-0 nova_compute[187439]: 2025-10-09 10:02:50.392 2 DEBUG nova.network.neutron [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Successfully updated port: 22bc2188-3978-476c-a2b1-0107e1eeb4cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  9 10:02:50 compute-0 nova_compute[187439]: 2025-10-09 10:02:50.404 2 DEBUG oslo_concurrency.lockutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-d7ef9240-faf8-4f56-b3ac-7a3e0830de38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  9 10:02:50 compute-0 nova_compute[187439]: 2025-10-09 10:02:50.404 2 DEBUG oslo_concurrency.lockutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-d7ef9240-faf8-4f56-b3ac-7a3e0830de38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  9 10:02:50 compute-0 nova_compute[187439]: 2025-10-09 10:02:50.405 2 DEBUG nova.network.neutron [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  9 10:02:50 compute-0 nova_compute[187439]: 2025-10-09 10:02:50.466 2 DEBUG nova.compute.manager [req-779c557c-761e-41f2-afc7-d6c9d0874c42 req-6dd63a58-86fb-4e2f-ada1-b514f450c909 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Received event network-changed-22bc2188-3978-476c-a2b1-0107e1eeb4cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  9 10:02:50 compute-0 nova_compute[187439]: 2025-10-09 10:02:50.466 2 DEBUG nova.compute.manager [req-779c557c-761e-41f2-afc7-d6c9d0874c42 req-6dd63a58-86fb-4e2f-ada1-b514f450c909 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Refreshing instance network info cache due to event network-changed-22bc2188-3978-476c-a2b1-0107e1eeb4cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  9 10:02:50 compute-0 nova_compute[187439]: 2025-10-09 10:02:50.467 2 DEBUG oslo_concurrency.lockutils [req-779c557c-761e-41f2-afc7-d6c9d0874c42 req-6dd63a58-86fb-4e2f-ada1-b514f450c909 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-d7ef9240-faf8-4f56-b3ac-7a3e0830de38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  9 10:02:50 compute-0 nova_compute[187439]: 2025-10-09 10:02:50.524 2 DEBUG nova.network.neutron [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.250 2 DEBUG nova.network.neutron [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Updating instance_info_cache with network_info: [{"id": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "address": "fa:16:3e:0b:ec:98", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22bc2188-39", "ovs_interfaceid": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.261 2 DEBUG oslo_concurrency.lockutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-d7ef9240-faf8-4f56-b3ac-7a3e0830de38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.262 2 DEBUG nova.compute.manager [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Instance network_info: |[{"id": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "address": "fa:16:3e:0b:ec:98", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22bc2188-39", "ovs_interfaceid": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.262 2 DEBUG oslo_concurrency.lockutils [req-779c557c-761e-41f2-afc7-d6c9d0874c42 req-6dd63a58-86fb-4e2f-ada1-b514f450c909 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-d7ef9240-faf8-4f56-b3ac-7a3e0830de38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.262 2 DEBUG nova.network.neutron [req-779c557c-761e-41f2-afc7-d6c9d0874c42 req-6dd63a58-86fb-4e2f-ada1-b514f450c909 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Refreshing network info cache for port 22bc2188-3978-476c-a2b1-0107e1eeb4cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.264 2 DEBUG nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Start _get_guest_xml network_info=[{"id": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "address": "fa:16:3e:0b:ec:98", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22bc2188-39", "ovs_interfaceid": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'boot_index': 0, 'encryption_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'encrypted': False, 'guest_format': None, 'image_id': '9546778e-959c-466e-9bef-81ace5bd1cc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
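The network_info blob repeated in the lines above is plain JSON; extracting the few fields the libvirt driver actually wires up makes it easier to read. A short parse over a trimmed copy of the payload (field values taken verbatim from the log):

    import json

    network_info = json.loads("""\
    [{"id": "22bc2188-3978-476c-a2b1-0107e1eeb4cd",
      "address": "fa:16:3e:0b:ec:98", "devname": "tap22bc2188-39",
      "network": {"bridge": "br-int",
        "subnets": [{"cidr": "10.100.0.0/28",
          "ips": [{"address": "10.100.0.13"}]}],
        "meta": {"mtu": 1442}}}]""")
    vif = network_info[0]
    print(vif["devname"], vif["address"],
          vif["network"]["subnets"][0]["ips"][0]["address"],
          "mtu", vif["network"]["meta"]["mtu"])
    # -> tap22bc2188-39 fa:16:3e:0b:ec:98 10.100.0.13 mtu 1442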
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.267 2 WARNING nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.270 2 DEBUG nova.virt.libvirt.host [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.270 2 DEBUG nova.virt.libvirt.host [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.274 2 DEBUG nova.virt.libvirt.host [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.274 2 DEBUG nova.virt.libvirt.host [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.274 2 DEBUG nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.275 2 DEBUG nova.virt.hardware [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T09:54:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c4b2ce4-c9d2-467c-bac4-dc6a1184a891',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.275 2 DEBUG nova.virt.hardware [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.275 2 DEBUG nova.virt.hardware [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.275 2 DEBUG nova.virt.hardware [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.275 2 DEBUG nova.virt.hardware [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.275 2 DEBUG nova.virt.hardware [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.276 2 DEBUG nova.virt.hardware [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.276 2 DEBUG nova.virt.hardware [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.276 2 DEBUG nova.virt.hardware [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.276 2 DEBUG nova.virt.hardware [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.276 2 DEBUG nova.virt.hardware [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.278 2 DEBUG oslo_concurrency.processutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:02:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 10:02:51 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2931780533' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.643 2 DEBUG oslo_concurrency.processutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.664 2 DEBUG nova.storage.rbd_utils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image d7ef9240-faf8-4f56-b3ac-7a3e0830de38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.667 2 DEBUG oslo_concurrency.processutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:02:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:02:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:51.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:02:51 compute-0 nova_compute[187439]: 2025-10-09 10:02:51.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:51 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v894: 337 pgs: 337 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 130 op/s
Oct  9 10:02:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 10:02:52 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1952398429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.033 2 DEBUG oslo_concurrency.processutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.366s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.035 2 DEBUG nova.virt.libvirt.vif [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T10:02:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1006874931',display_name='tempest-TestNetworkBasicOps-server-1006874931',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1006874931',id=12,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFDdjFIQj4FOLYlCs6zljk6wKa9pI2ISqD9Sb6SVhatdV3gRq8sNB/xPPzWRU7uKoU0bIS8yl5sqGcf3FjrbOxRvx3JpBVSln6lZ2WQLyfYlAFw2+zDNMalVPKJfvSSdSA==',key_name='tempest-TestNetworkBasicOps-17187709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-zj7fuszu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T10:02:48Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=d7ef9240-faf8-4f56-b3ac-7a3e0830de38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "address": "fa:16:3e:0b:ec:98", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22bc2188-39", "ovs_interfaceid": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.035 2 DEBUG nova.network.os_vif_util [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "address": "fa:16:3e:0b:ec:98", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22bc2188-39", "ovs_interfaceid": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.036 2 DEBUG nova.network.os_vif_util [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:ec:98,bridge_name='br-int',has_traffic_filtering=True,id=22bc2188-3978-476c-a2b1-0107e1eeb4cd,network=Network(7e36da7d-913d-4101-a7c2-e1698abf35be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22bc2188-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.038 2 DEBUG nova.objects.instance [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_devices' on Instance uuid d7ef9240-faf8-4f56-b3ac-7a3e0830de38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.050 2 DEBUG nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] End _get_guest_xml xml=<domain type="kvm">
Oct  9 10:02:52 compute-0 nova_compute[187439]:  <uuid>d7ef9240-faf8-4f56-b3ac-7a3e0830de38</uuid>
Oct  9 10:02:52 compute-0 nova_compute[187439]:  <name>instance-0000000c</name>
Oct  9 10:02:52 compute-0 nova_compute[187439]:  <memory>131072</memory>
Oct  9 10:02:52 compute-0 nova_compute[187439]:  <vcpu>1</vcpu>
Oct  9 10:02:52 compute-0 nova_compute[187439]:  <metadata>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <nova:name>tempest-TestNetworkBasicOps-server-1006874931</nova:name>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <nova:creationTime>2025-10-09 10:02:51</nova:creationTime>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <nova:flavor name="m1.nano">
Oct  9 10:02:52 compute-0 nova_compute[187439]:        <nova:memory>128</nova:memory>
Oct  9 10:02:52 compute-0 nova_compute[187439]:        <nova:disk>1</nova:disk>
Oct  9 10:02:52 compute-0 nova_compute[187439]:        <nova:swap>0</nova:swap>
Oct  9 10:02:52 compute-0 nova_compute[187439]:        <nova:ephemeral>0</nova:ephemeral>
Oct  9 10:02:52 compute-0 nova_compute[187439]:        <nova:vcpus>1</nova:vcpus>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      </nova:flavor>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <nova:owner>
Oct  9 10:02:52 compute-0 nova_compute[187439]:        <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct  9 10:02:52 compute-0 nova_compute[187439]:        <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      </nova:owner>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <nova:ports>
Oct  9 10:02:52 compute-0 nova_compute[187439]:        <nova:port uuid="22bc2188-3978-476c-a2b1-0107e1eeb4cd">
Oct  9 10:02:52 compute-0 nova_compute[187439]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:        </nova:port>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      </nova:ports>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    </nova:instance>
Oct  9 10:02:52 compute-0 nova_compute[187439]:  </metadata>
Oct  9 10:02:52 compute-0 nova_compute[187439]:  <sysinfo type="smbios">
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <system>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <entry name="manufacturer">RDO</entry>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <entry name="product">OpenStack Compute</entry>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <entry name="serial">d7ef9240-faf8-4f56-b3ac-7a3e0830de38</entry>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <entry name="uuid">d7ef9240-faf8-4f56-b3ac-7a3e0830de38</entry>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <entry name="family">Virtual Machine</entry>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    </system>
Oct  9 10:02:52 compute-0 nova_compute[187439]:  </sysinfo>
Oct  9 10:02:52 compute-0 nova_compute[187439]:  <os>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <boot dev="hd"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <smbios mode="sysinfo"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:  </os>
Oct  9 10:02:52 compute-0 nova_compute[187439]:  <features>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <acpi/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <apic/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <vmcoreinfo/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:  </features>
Oct  9 10:02:52 compute-0 nova_compute[187439]:  <clock offset="utc">
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <timer name="pit" tickpolicy="delay"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <timer name="hpet" present="no"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:  </clock>
Oct  9 10:02:52 compute-0 nova_compute[187439]:  <cpu mode="host-model" match="exact">
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <topology sockets="1" cores="1" threads="1"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:  </cpu>
Oct  9 10:02:52 compute-0 nova_compute[187439]:  <devices>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <disk type="network" device="disk">
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <driver type="raw" cache="none"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <source protocol="rbd" name="vms/d7ef9240-faf8-4f56-b3ac-7a3e0830de38_disk">
Oct  9 10:02:52 compute-0 nova_compute[187439]:        <host name="192.168.122.100" port="6789"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:        <host name="192.168.122.102" port="6789"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:        <host name="192.168.122.101" port="6789"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      </source>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <auth username="openstack">
Oct  9 10:02:52 compute-0 nova_compute[187439]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      </auth>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <target dev="vda" bus="virtio"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    </disk>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <disk type="network" device="cdrom">
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <driver type="raw" cache="none"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <source protocol="rbd" name="vms/d7ef9240-faf8-4f56-b3ac-7a3e0830de38_disk.config">
Oct  9 10:02:52 compute-0 nova_compute[187439]:        <host name="192.168.122.100" port="6789"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:        <host name="192.168.122.102" port="6789"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:        <host name="192.168.122.101" port="6789"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      </source>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <auth username="openstack">
Oct  9 10:02:52 compute-0 nova_compute[187439]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      </auth>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <target dev="sda" bus="sata"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    </disk>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <interface type="ethernet">
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <mac address="fa:16:3e:0b:ec:98"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <model type="virtio"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <driver name="vhost" rx_queue_size="512"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <mtu size="1442"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <target dev="tap22bc2188-39"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    </interface>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <serial type="pty">
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <log file="/var/lib/nova/instances/d7ef9240-faf8-4f56-b3ac-7a3e0830de38/console.log" append="off"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    </serial>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <video>
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <model type="virtio"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    </video>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <input type="tablet" bus="usb"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <rng model="virtio">
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <backend model="random">/dev/urandom</backend>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    </rng>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <controller type="usb" index="0"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    <memballoon model="virtio">
Oct  9 10:02:52 compute-0 nova_compute[187439]:      <stats period="10"/>
Oct  9 10:02:52 compute-0 nova_compute[187439]:    </memballoon>
Oct  9 10:02:52 compute-0 nova_compute[187439]:  </devices>
Oct  9 10:02:52 compute-0 nova_compute[187439]: </domain>
Oct  9 10:02:52 compute-0 nova_compute[187439]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.051 2 DEBUG nova.compute.manager [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Preparing to wait for external event network-vif-plugged-22bc2188-3978-476c-a2b1-0107e1eeb4cd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.051 2 DEBUG oslo_concurrency.lockutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.051 2 DEBUG oslo_concurrency.lockutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.052 2 DEBUG oslo_concurrency.lockutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.052 2 DEBUG nova.virt.libvirt.vif [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T10:02:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1006874931',display_name='tempest-TestNetworkBasicOps-server-1006874931',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1006874931',id=12,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFDdjFIQj4FOLYlCs6zljk6wKa9pI2ISqD9Sb6SVhatdV3gRq8sNB/xPPzWRU7uKoU0bIS8yl5sqGcf3FjrbOxRvx3JpBVSln6lZ2WQLyfYlAFw2+zDNMalVPKJfvSSdSA==',key_name='tempest-TestNetworkBasicOps-17187709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-zj7fuszu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T10:02:48Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=d7ef9240-faf8-4f56-b3ac-7a3e0830de38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "address": "fa:16:3e:0b:ec:98", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22bc2188-39", "ovs_interfaceid": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.052 2 DEBUG nova.network.os_vif_util [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "address": "fa:16:3e:0b:ec:98", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22bc2188-39", "ovs_interfaceid": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.053 2 DEBUG nova.network.os_vif_util [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:ec:98,bridge_name='br-int',has_traffic_filtering=True,id=22bc2188-3978-476c-a2b1-0107e1eeb4cd,network=Network(7e36da7d-913d-4101-a7c2-e1698abf35be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22bc2188-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.053 2 DEBUG os_vif [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:ec:98,bridge_name='br-int',has_traffic_filtering=True,id=22bc2188-3978-476c-a2b1-0107e1eeb4cd,network=Network(7e36da7d-913d-4101-a7c2-e1698abf35be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22bc2188-39') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.054 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.054 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.060 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22bc2188-39, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.061 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap22bc2188-39, col_values=(('external_ids', {'iface-id': '22bc2188-3978-476c-a2b1-0107e1eeb4cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:ec:98', 'vm-uuid': 'd7ef9240-faf8-4f56-b3ac-7a3e0830de38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:52 compute-0 NetworkManager[982]: <info>  [1760004172.0630] manager: (tap22bc2188-39): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.068 2 INFO os_vif [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:ec:98,bridge_name='br-int',has_traffic_filtering=True,id=22bc2188-3978-476c-a2b1-0107e1eeb4cd,network=Network(7e36da7d-913d-4101-a7c2-e1698abf35be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22bc2188-39')#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.100 2 DEBUG nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.100 2 DEBUG nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.101 2 DEBUG nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:0b:ec:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.101 2 INFO nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Using config drive#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.118 2 DEBUG nova.storage.rbd_utils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image d7ef9240-faf8-4f56-b3ac-7a3e0830de38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:02:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:52.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.147 2 DEBUG nova.network.neutron [req-779c557c-761e-41f2-afc7-d6c9d0874c42 req-6dd63a58-86fb-4e2f-ada1-b514f450c909 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Updated VIF entry in instance network info cache for port 22bc2188-3978-476c-a2b1-0107e1eeb4cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.147 2 DEBUG nova.network.neutron [req-779c557c-761e-41f2-afc7-d6c9d0874c42 req-6dd63a58-86fb-4e2f-ada1-b514f450c909 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Updating instance_info_cache with network_info: [{"id": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "address": "fa:16:3e:0b:ec:98", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22bc2188-39", "ovs_interfaceid": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.164 2 DEBUG oslo_concurrency.lockutils [req-779c557c-761e-41f2-afc7-d6c9d0874c42 req-6dd63a58-86fb-4e2f-ada1-b514f450c909 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-d7ef9240-faf8-4f56-b3ac-7a3e0830de38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:02:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:02:52] "GET /metrics HTTP/1.1" 200 48553 "" "Prometheus/2.51.0"
Oct  9 10:02:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:02:52] "GET /metrics HTTP/1.1" 200 48553 "" "Prometheus/2.51.0"
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.343 2 INFO nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Creating config drive at /var/lib/nova/instances/d7ef9240-faf8-4f56-b3ac-7a3e0830de38/disk.config#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.348 2 DEBUG oslo_concurrency.processutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d7ef9240-faf8-4f56-b3ac-7a3e0830de38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_v5_1uf8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.477 2 DEBUG oslo_concurrency.processutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d7ef9240-faf8-4f56-b3ac-7a3e0830de38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_v5_1uf8" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.501 2 DEBUG nova.storage.rbd_utils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image d7ef9240-faf8-4f56-b3ac-7a3e0830de38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.504 2 DEBUG oslo_concurrency.processutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d7ef9240-faf8-4f56-b3ac-7a3e0830de38/disk.config d7ef9240-faf8-4f56-b3ac-7a3e0830de38_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.602 2 DEBUG oslo_concurrency.processutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d7ef9240-faf8-4f56-b3ac-7a3e0830de38/disk.config d7ef9240-faf8-4f56-b3ac-7a3e0830de38_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.603 2 INFO nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Deleting local config drive /var/lib/nova/instances/d7ef9240-faf8-4f56-b3ac-7a3e0830de38/disk.config because it was imported into RBD.#033[00m
Oct  9 10:02:52 compute-0 kernel: tap22bc2188-39: entered promiscuous mode
Oct  9 10:02:52 compute-0 NetworkManager[982]: <info>  [1760004172.6421] manager: (tap22bc2188-39): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:52 compute-0 ovn_controller[83056]: 2025-10-09T10:02:52Z|00070|binding|INFO|Claiming lport 22bc2188-3978-476c-a2b1-0107e1eeb4cd for this chassis.
Oct  9 10:02:52 compute-0 ovn_controller[83056]: 2025-10-09T10:02:52Z|00071|binding|INFO|22bc2188-3978-476c-a2b1-0107e1eeb4cd: Claiming fa:16:3e:0b:ec:98 10.100.0.13
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:52 compute-0 NetworkManager[982]: <info>  [1760004172.6546] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Oct  9 10:02:52 compute-0 NetworkManager[982]: <info>  [1760004172.6553] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.655 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:ec:98 10.100.0.13'], port_security=['fa:16:3e:0b:ec:98 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd7ef9240-faf8-4f56-b3ac-7a3e0830de38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e36da7d-913d-4101-a7c2-e1698abf35be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a8dd992f-cc21-4be4-9d79-7a1b6fb1cc98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e49a2e1f-bde0-4698-a31c-366cd4b00fe5, chassis=[<ovs.db.idl.Row object at 0x7f406a6797f0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f406a6797f0>], logical_port=22bc2188-3978-476c-a2b1-0107e1eeb4cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.656 92053 INFO neutron.agent.ovn.metadata.agent [-] Port 22bc2188-3978-476c-a2b1-0107e1eeb4cd in datapath 7e36da7d-913d-4101-a7c2-e1698abf35be bound to our chassis#033[00m
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.657 92053 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7e36da7d-913d-4101-a7c2-e1698abf35be#033[00m
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.675 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[15b87aea-7f76-4f59-9f34-1f1edb5e5ea2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.676 92053 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7e36da7d-91 in ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  9 10:02:52 compute-0 systemd-udevd[202498]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.678 192856 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7e36da7d-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.678 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[5d91b810-219c-48e6-9bed-26af6cb4cbd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:52 compute-0 systemd-machined[143379]: New machine qemu-5-instance-0000000c.
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.683 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[05132542-b07a-415c-b546-e40b9cace9ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:52 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-0000000c.
Oct  9 10:02:52 compute-0 NetworkManager[982]: <info>  [1760004172.6943] device (tap22bc2188-39): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 10:02:52 compute-0 NetworkManager[982]: <info>  [1760004172.6952] device (tap22bc2188-39): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.700 92357 DEBUG oslo.privsep.daemon [-] privsep: reply[3d277fc9-1eb2-4bd4-ae89-7364861eeb33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.724 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[33512686-21ef-44b2-92e0-9c7725bceed6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.750 192891 DEBUG oslo.privsep.daemon [-] privsep: reply[c8fbd03d-b149-40a2-99e8-689209cbaef6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.765 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[48fe772f-d6cf-4232-a011-3dd99a0fc857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:52 compute-0 NetworkManager[982]: <info>  [1760004172.7663] manager: (tap7e36da7d-90): new Veth device (/org/freedesktop/NetworkManager/Devices/55)
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.801 192891 DEBUG oslo.privsep.daemon [-] privsep: reply[46c61528-f398-4ae9-b46b-c4b52e4569f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.803 192891 DEBUG oslo.privsep.daemon [-] privsep: reply[43b85a44-5bc2-4359-888b-28ccb00a1466]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:52 compute-0 ovn_controller[83056]: 2025-10-09T10:02:52Z|00072|binding|INFO|Setting lport 22bc2188-3978-476c-a2b1-0107e1eeb4cd ovn-installed in OVS
Oct  9 10:02:52 compute-0 ovn_controller[83056]: 2025-10-09T10:02:52Z|00073|binding|INFO|Setting lport 22bc2188-3978-476c-a2b1-0107e1eeb4cd up in Southbound
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:52 compute-0 NetworkManager[982]: <info>  [1760004172.8248] device (tap7e36da7d-90): carrier: link connected
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.828 192891 DEBUG oslo.privsep.daemon [-] privsep: reply[bcaaf42c-9bde-4d40-bccc-63523944a94e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.845 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[deac460a-b9b2-4206-8a2f-493c122e85c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e36da7d-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d9:a3:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 185735, 'reachable_time': 18221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 202523, 'error': None, 'target': 'ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.861 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5625ad-3f52-4b71-9e24-38b9de949671]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed9:a343'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 185735, 'tstamp': 185735}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 202524, 'error': None, 'target': 'ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.888 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[a383da1a-9936-45ab-8d4c-fc8593e061b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e36da7d-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d9:a3:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 185735, 'reachable_time': 18221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 202526, 'error': None, 'target': 'ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
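The two RTM_NEWLINK payloads above are raw pyroute2 netlink messages describing the veth half tap7e36da7d-91 inside the ovnmeta namespace. A hedged sketch of how such a dump is produced and unpacked, assuming pyroute2 (the namespace and interface names are taken from the log):

```python
# Query link state inside the metadata namespace and pull a few of the
# IFLA_* attributes visible in the logged RTM_NEWLINK dump.
from pyroute2 import NetNS

with NetNS('ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be') as ns:
    idx = ns.link_lookup(ifname='tap7e36da7d-91')[0]
    msg = ns.get_links(idx)[0]              # one RTM_NEWLINK message
    print(msg.get_attr('IFLA_IFNAME'))      # 'tap7e36da7d-91'
    print(msg.get_attr('IFLA_ADDRESS'))     # 'fa:16:3e:d9:a3:43'
    print(msg.get_attr('IFLA_OPERSTATE'))   # 'UP'
    stats = msg.get_attr('IFLA_STATS64')
    print(stats['rx_packets'], stats['tx_packets'])
```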
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.918 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a1f80e-b1f9-4f6d-a2ae-dc3dd9443f4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.975 2 DEBUG nova.compute.manager [req-b73f864b-c2f0-4e6d-bc12-799560b533bc req-9c954744-04ce-48e6-aca1-31e594587903 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Received event network-vif-plugged-22bc2188-3978-476c-a2b1-0107e1eeb4cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.975 2 DEBUG oslo_concurrency.lockutils [req-b73f864b-c2f0-4e6d-bc12-799560b533bc req-9c954744-04ce-48e6-aca1-31e594587903 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.976 2 DEBUG oslo_concurrency.lockutils [req-b73f864b-c2f0-4e6d-bc12-799560b533bc req-9c954744-04ce-48e6-aca1-31e594587903 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.976 2 DEBUG oslo_concurrency.lockutils [req-b73f864b-c2f0-4e6d-bc12-799560b533bc req-9c954744-04ce-48e6-aca1-31e594587903 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.976 2 DEBUG nova.compute.manager [req-b73f864b-c2f0-4e6d-bc12-799560b533bc req-9c954744-04ce-48e6-aca1-31e594587903 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Processing event network-vif-plugged-22bc2188-3978-476c-a2b1-0107e1eeb4cd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
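The Received/Processing pair above is Nova's external-event handshake: the spawning thread registers a waiter for network-vif-plugged-<port>, and Neutron's API callback pops it under the per-instance events lock. A toy stand-in for that waiter table (pure illustration, not Nova's code):

```python
# Toy version of the bookkeeping behind wait_for_instance_event /
# pop_instance_event: spawn waits on a named event, the callback fires it.
import threading

class InstanceEvents:
    def __init__(self):
        self._lock = threading.Lock()
        self._events = {}   # (instance_uuid, event_name) -> threading.Event

    def prepare(self, instance, name):
        with self._lock:
            return self._events.setdefault((instance, name),
                                           threading.Event())

    def pop_instance_event(self, instance, name):
        # Called on "Received event ..."; wakes the waiting spawn thread.
        with self._lock:
            ev = self._events.pop((instance, name), None)
        if ev:
            ev.set()
        return ev

events = InstanceEvents()
waiter = events.prepare('d7ef9240', 'network-vif-plugged-22bc2188')
events.pop_instance_event('d7ef9240', 'network-vif-plugged-22bc2188')
waiter.wait(timeout=300)    # returns at once; the event already fired
```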
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.977 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[3d39b7ae-50e5-42a3-81b9-663cad2952c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.978 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e36da7d-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.978 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.979 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e36da7d-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:02:52 compute-0 kernel: tap7e36da7d-90: entered promiscuous mode
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:52 compute-0 NetworkManager[982]: <info>  [1760004172.9812] manager: (tap7e36da7d-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct  9 10:02:52 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:52.983 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7e36da7d-90, col_values=(('external_ids', {'iface-id': 'e74168ad-5871-4088-b5cd-db351251a793'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
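The three `Running txn` lines queue ovsdbapp commands against the local Open_vSwitch database: drop the tap from br-ex if it is there, add it to br-int, then point external_ids:iface-id at the OVN logical port. A sketch of the same trio through ovsdbapp's vsctl-style API (the socket path is an assumption):

```python
# Replay of the logged DelPort/AddPort/DbSet transaction with ovsdbapp.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

SOCK = 'unix:/run/openvswitch/db.sock'   # assumed local ovsdb endpoint
idl = connection.OvsdbIdl.from_server(SOCK, 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.del_port('tap7e36da7d-90', bridge='br-ex', if_exists=True))
    txn.add(api.add_port('br-int', 'tap7e36da7d-90', may_exist=True))
    txn.add(api.db_set(
        'Interface', 'tap7e36da7d-90',
        ('external_ids',
         {'iface-id': 'e74168ad-5871-4088-b5cd-db351251a793'})))
```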
Oct  9 10:02:52 compute-0 ovn_controller[83056]: 2025-10-09T10:02:52Z|00074|binding|INFO|Releasing lport e74168ad-5871-4088-b5cd-db351251a793 from this chassis (sb_readonly=0)
Oct  9 10:02:52 compute-0 nova_compute[187439]: 2025-10-09 10:02:52.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:53.009 92053 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7e36da7d-913d-4101-a7c2-e1698abf35be.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7e36da7d-913d-4101-a7c2-e1698abf35be.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:53.009 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[5c5aad45-01cc-4eb8-91a1-a2930ee72565]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:53.010 92053 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]: global
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]:    log         /dev/log local0 debug
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]:    log-tag     haproxy-metadata-proxy-7e36da7d-913d-4101-a7c2-e1698abf35be
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]:    user        root
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]:    group       root
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]:    maxconn     1024
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]:    pidfile     /var/lib/neutron/external/pids/7e36da7d-913d-4101-a7c2-e1698abf35be.pid.haproxy
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]:    daemon
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]: 
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]: defaults
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]:    log global
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]:    mode http
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]:    option httplog
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]:    option dontlognull
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]:    option http-server-close
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]:    option forwardfor
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]:    retries                 3
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]:    timeout http-request    30s
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]:    timeout connect         30s
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]:    timeout client          32s
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]:    timeout server          32s
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]:    timeout http-keep-alive 30s
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]: 
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]: 
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]: listen listener
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]:    bind 169.254.169.254:80
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]:    server metadata /var/lib/neutron/metadata_proxy
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]:    http-request add-header X-OVN-Network-ID 7e36da7d-913d-4101-a7c2-e1698abf35be
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  9 10:02:53 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:53.010 92053 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be', 'env', 'PROCESS_TAG=haproxy-7e36da7d-913d-4101-a7c2-e1698abf35be', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7e36da7d-913d-4101-a7c2-e1698abf35be.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
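Stripped of the rootwrap indirection, the command logged above amounts to launching haproxy on the rendered config inside the ovnmeta namespace. A hedged equivalent with plain subprocess (paths and names copied from the log; root privileges assumed, rootwrap elided):

```python
# Launch the metadata haproxy inside its network namespace, mirroring
# the rootwrap command above (privilege handling left out of the sketch).
import subprocess

NET = '7e36da7d-913d-4101-a7c2-e1698abf35be'
subprocess.run(
    ['ip', 'netns', 'exec', f'ovnmeta-{NET}',
     'env', f'PROCESS_TAG=haproxy-{NET}',
     'haproxy', '-f',
     f'/var/lib/neutron/ovn-metadata-proxy/{NET}.conf'],
    check=True,
)
```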
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:02:53 compute-0 podman[202621]: 2025-10-09 10:02:53.343633677 +0000 UTC m=+0.037187409 container create 267d6c751e8f411e75f22616a4bbaaa51988b0acb10ce4efc24665dcf3570b0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  9 10:02:53 compute-0 systemd[1]: Started libpod-conmon-267d6c751e8f411e75f22616a4bbaaa51988b0acb10ce4efc24665dcf3570b0a.scope.
Oct  9 10:02:53 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:02:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e58e4544d3c0d05330051b5b1693e1f5dc7cacdf226fd5a44d99db672e3315/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  9 10:02:53 compute-0 podman[202621]: 2025-10-09 10:02:53.402109396 +0000 UTC m=+0.095663137 container init 267d6c751e8f411e75f22616a4bbaaa51988b0acb10ce4efc24665dcf3570b0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  9 10:02:53 compute-0 podman[202621]: 2025-10-09 10:02:53.408038196 +0000 UTC m=+0.101591927 container start 267d6c751e8f411e75f22616a4bbaaa51988b0acb10ce4efc24665dcf3570b0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  9 10:02:53 compute-0 podman[202621]: 2025-10-09 10:02:53.327630385 +0000 UTC m=+0.021184136 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 10:02:53 compute-0 neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be[202633]: [NOTICE]   (202637) : New worker (202639) forked
Oct  9 10:02:53 compute-0 neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be[202633]: [NOTICE]   (202637) : Loading success.
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.545 2 DEBUG nova.compute.manager [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.546 2 DEBUG nova.virt.driver [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Emitting event <LifecycleEvent: 1760004173.5454264, d7ef9240-faf8-4f56-b3ac-7a3e0830de38 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.546 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] VM Started (Lifecycle Event)#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.550 2 DEBUG nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.552 2 INFO nova.virt.libvirt.driver [-] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Instance spawned successfully.#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.553 2 DEBUG nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.563 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.568 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
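The "current DB power_state: 0, VM power_state: 1" comparison above uses Nova's integer power-state codes: 0 is NOSTATE (nothing recorded yet) and 1 is RUNNING, so the sync is reconciling a freshly started guest against a not-yet-updated database row. A small reference sketch (values as defined in nova.compute.power_state):

```python
# Nova power-state codes behind "DB power_state: 0, VM power_state: 1".
STATE_MAP = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
             4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
db_state, vm_state = 0, 1                  # the values in the log line
print(STATE_MAP[db_state], '->', STATE_MAP[vm_state])  # NOSTATE -> RUNNING
```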
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.570 2 DEBUG nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.570 2 DEBUG nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.571 2 DEBUG nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.571 2 DEBUG nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.571 2 DEBUG nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.571 2 DEBUG nova.virt.libvirt.driver [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.588 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.588 2 DEBUG nova.virt.driver [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Emitting event <LifecycleEvent: 1760004173.5456421, d7ef9240-faf8-4f56-b3ac-7a3e0830de38 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.589 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] VM Paused (Lifecycle Event)#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.606 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.608 2 DEBUG nova.virt.driver [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] Emitting event <LifecycleEvent: 1760004173.549279, d7ef9240-faf8-4f56-b3ac-7a3e0830de38 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.609 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] VM Resumed (Lifecycle Event)#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.614 2 INFO nova.compute.manager [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Took 5.07 seconds to spawn the instance on the hypervisor.#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.614 2 DEBUG nova.compute.manager [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.621 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.623 2 DEBUG nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.642 2 INFO nova.compute.manager [None req-799313dd-1f9e-4a03-a400-44ac5b83e3c5 - - - - - -] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.660 2 INFO nova.compute.manager [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Took 5.78 seconds to build instance.#033[00m
Oct  9 10:02:53 compute-0 nova_compute[187439]: 2025-10-09 10:02:53.668 2 DEBUG oslo_concurrency.lockutils [None req-0017634a-f2f3-4d62-a39d-fcbdb14d97c5 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:53.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:53 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v895: 337 pgs: 337 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Oct  9 10:02:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:54.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:55 compute-0 nova_compute[187439]: 2025-10-09 10:02:55.049 2 DEBUG nova.compute.manager [req-cb673ecf-6505-40e7-b60e-4c321249d222 req-539953a1-0320-49b5-a5f3-43a75bd4b7e3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Received event network-vif-plugged-22bc2188-3978-476c-a2b1-0107e1eeb4cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:02:55 compute-0 nova_compute[187439]: 2025-10-09 10:02:55.050 2 DEBUG oslo_concurrency.lockutils [req-cb673ecf-6505-40e7-b60e-4c321249d222 req-539953a1-0320-49b5-a5f3-43a75bd4b7e3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:55 compute-0 nova_compute[187439]: 2025-10-09 10:02:55.051 2 DEBUG oslo_concurrency.lockutils [req-cb673ecf-6505-40e7-b60e-4c321249d222 req-539953a1-0320-49b5-a5f3-43a75bd4b7e3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:55 compute-0 nova_compute[187439]: 2025-10-09 10:02:55.051 2 DEBUG oslo_concurrency.lockutils [req-cb673ecf-6505-40e7-b60e-4c321249d222 req-539953a1-0320-49b5-a5f3-43a75bd4b7e3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:55 compute-0 nova_compute[187439]: 2025-10-09 10:02:55.051 2 DEBUG nova.compute.manager [req-cb673ecf-6505-40e7-b60e-4c321249d222 req-539953a1-0320-49b5-a5f3-43a75bd4b7e3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] No waiting events found dispatching network-vif-plugged-22bc2188-3978-476c-a2b1-0107e1eeb4cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:02:55 compute-0 nova_compute[187439]: 2025-10-09 10:02:55.051 2 WARNING nova.compute.manager [req-cb673ecf-6505-40e7-b60e-4c321249d222 req-539953a1-0320-49b5-a5f3-43a75bd4b7e3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Received unexpected event network-vif-plugged-22bc2188-3978-476c-a2b1-0107e1eeb4cd for instance with vm_state active and task_state None.#033[00m
Oct  9 10:02:55 compute-0 nova_compute[187439]: 2025-10-09 10:02:55.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:02:55 compute-0 nova_compute[187439]: 2025-10-09 10:02:55.265 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:55 compute-0 nova_compute[187439]: 2025-10-09 10:02:55.266 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:55 compute-0 nova_compute[187439]: 2025-10-09 10:02:55.266 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:55 compute-0 nova_compute[187439]: 2025-10-09 10:02:55.266 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:02:55 compute-0 nova_compute[187439]: 2025-10-09 10:02:55.267 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:02:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:02:55 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1152412393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:02:55 compute-0 nova_compute[187439]: 2025-10-09 10:02:55.637 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.370s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
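The resource tracker sizes the RBD-backed disk pool by shelling out to `ceph df --format=json`, as the two lines above show (0.370s of wall time). A sketch of the call and the cluster-level fields it reads, assuming ceph's usual JSON layout (client id and conf path from the log):

```python
# Run "ceph df --format=json" and derive cluster totals, as the
# resource tracker does before reporting free_disk.
import json
import subprocess

out = subprocess.run(
    ['ceph', 'df', '--format=json',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
    check=True, capture_output=True, text=True,
).stdout
stats = json.loads(out)['stats']
total_gib = stats['total_bytes'] / 1024 ** 3
avail_gib = stats['total_avail_bytes'] / 1024 ** 3
print(f'{avail_gib:.2f} GiB free of {total_gib:.2f} GiB')
```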
Oct  9 10:02:55 compute-0 nova_compute[187439]: 2025-10-09 10:02:55.688 2 DEBUG nova.virt.libvirt.driver [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  9 10:02:55 compute-0 nova_compute[187439]: 2025-10-09 10:02:55.688 2 DEBUG nova.virt.libvirt.driver [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  9 10:02:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:02:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:55.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:02:55 compute-0 podman[202669]: 2025-10-09 10:02:55.764744471 +0000 UTC m=+0.090061372 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid)
Oct  9 10:02:55 compute-0 nova_compute[187439]: 2025-10-09 10:02:55.959 2 WARNING nova.virt.libvirt.driver [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:02:55 compute-0 nova_compute[187439]: 2025-10-09 10:02:55.962 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4522MB free_disk=59.94662857055664GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:02:55 compute-0 nova_compute[187439]: 2025-10-09 10:02:55.963 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:55 compute-0 nova_compute[187439]: 2025-10-09 10:02:55.963 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:55 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v896: 337 pgs: 337 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 235 op/s
Oct  9 10:02:56 compute-0 nova_compute[187439]: 2025-10-09 10:02:56.019 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Instance d7ef9240-faf8-4f56-b3ac-7a3e0830de38 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  9 10:02:56 compute-0 nova_compute[187439]: 2025-10-09 10:02:56.019 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:02:56 compute-0 nova_compute[187439]: 2025-10-09 10:02:56.019 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:02:56 compute-0 nova_compute[187439]: 2025-10-09 10:02:56.043 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:02:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:02:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:56.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:02:56 compute-0 nova_compute[187439]: 2025-10-09 10:02:56.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:56.157 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:02:56 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:02:56.158 92053 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
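The "Matched UPDATE: SbGlobalUpdateEvent" line two entries up is ovsdbapp's row-event dispatch: the agent registers an event on the SB_Global table, and its run() schedules the delayed chassis refresh logged just above. A sketch of the event-class shape (the handler body is an illustrative stand-in, not neutron's):

```python
# Shape of an ovsdbapp row event like the SbGlobalUpdateEvent matched above.
from ovsdbapp.backend.ovs_idl import event as row_event

class SbGlobalUpdateEvent(row_event.RowEvent):
    """Fire on any update to the single SB_Global row."""

    def __init__(self):
        super().__init__((self.ROW_UPDATE,), 'SB_Global', None)
        self.event_name = self.__class__.__name__

    def run(self, event, row, old):
        # The real agent delays the chassis-table update a few seconds
        # here to coalesce bursts of nb_cfg bumps; print is a stand-in.
        print('nb_cfg is now', row.nb_cfg)
```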
Oct  9 10:02:56 compute-0 nova_compute[187439]: 2025-10-09 10:02:56.412 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.369s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:02:56 compute-0 nova_compute[187439]: 2025-10-09 10:02:56.416 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:02:56 compute-0 nova_compute[187439]: 2025-10-09 10:02:56.430 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
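Placement turns each inventory row above into schedulable capacity as (total - reserved) x allocation_ratio, which is why this 4-vCPU host can carry 16 VCPU worth of allocations. Checking the logged inventory:

```python
# Effective capacity per the placement formula, using the inventory above.
inv = {
    'VCPU':      {'total': 4,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, row in inv.items():
    cap = (row['total'] - row['reserved']) * row['allocation_ratio']
    print(rc, cap)    # VCPU 16.0, MEMORY_MB 7168.0, DISK_GB 52.2
```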
Oct  9 10:02:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:56 compute-0 nova_compute[187439]: 2025-10-09 10:02:56.444 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  9 10:02:56 compute-0 nova_compute[187439]: 2025-10-09 10:02:56.445 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 10:02:56 compute-0 nova_compute[187439]: 2025-10-09 10:02:56.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:02:57 compute-0 nova_compute[187439]: 2025-10-09 10:02:57.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:02:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:57.088Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:57.095Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:57.096Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:57.096Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:57 compute-0 nova_compute[187439]: 2025-10-09 10:02:57.410 2 DEBUG nova.compute.manager [req-c191bd95-031e-4f76-a5f9-ac737a826098 req-e1370101-0cb5-4f9d-8955-472129810a17 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Received event network-changed-22bc2188-3978-476c-a2b1-0107e1eeb4cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  9 10:02:57 compute-0 nova_compute[187439]: 2025-10-09 10:02:57.410 2 DEBUG nova.compute.manager [req-c191bd95-031e-4f76-a5f9-ac737a826098 req-e1370101-0cb5-4f9d-8955-472129810a17 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Refreshing instance network info cache due to event network-changed-22bc2188-3978-476c-a2b1-0107e1eeb4cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  9 10:02:57 compute-0 nova_compute[187439]: 2025-10-09 10:02:57.411 2 DEBUG oslo_concurrency.lockutils [req-c191bd95-031e-4f76-a5f9-ac737a826098 req-e1370101-0cb5-4f9d-8955-472129810a17 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-d7ef9240-faf8-4f56-b3ac-7a3e0830de38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  9 10:02:57 compute-0 nova_compute[187439]: 2025-10-09 10:02:57.411 2 DEBUG oslo_concurrency.lockutils [req-c191bd95-031e-4f76-a5f9-ac737a826098 req-e1370101-0cb5-4f9d-8955-472129810a17 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-d7ef9240-faf8-4f56-b3ac-7a3e0830de38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  9 10:02:57 compute-0 nova_compute[187439]: 2025-10-09 10:02:57.411 2 DEBUG nova.network.neutron [req-c191bd95-031e-4f76-a5f9-ac737a826098 req-e1370101-0cb5-4f9d-8955-472129810a17 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Refreshing network info cache for port 22bc2188-3978-476c-a2b1-0107e1eeb4cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  9 10:02:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:02:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:57.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:02:57 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v897: 337 pgs: 337 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct  9 10:02:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:02:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:58.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:58 compute-0 nova_compute[187439]: 2025-10-09 10:02:58.243 2 DEBUG nova.network.neutron [req-c191bd95-031e-4f76-a5f9-ac737a826098 req-e1370101-0cb5-4f9d-8955-472129810a17 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Updated VIF entry in instance network info cache for port 22bc2188-3978-476c-a2b1-0107e1eeb4cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  9 10:02:58 compute-0 nova_compute[187439]: 2025-10-09 10:02:58.245 2 DEBUG nova.network.neutron [req-c191bd95-031e-4f76-a5f9-ac737a826098 req-e1370101-0cb5-4f9d-8955-472129810a17 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Updating instance_info_cache with network_info: [{"id": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "address": "fa:16:3e:0b:ec:98", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22bc2188-39", "ovs_interfaceid": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  9 10:02:58 compute-0 nova_compute[187439]: 2025-10-09 10:02:58.257 2 DEBUG oslo_concurrency.lockutils [req-c191bd95-031e-4f76-a5f9-ac737a826098 req-e1370101-0cb5-4f9d-8955-472129810a17 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-d7ef9240-faf8-4f56-b3ac-7a3e0830de38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  9 10:02:58 compute-0 nova_compute[187439]: 2025-10-09 10:02:58.445 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:02:58 compute-0 nova_compute[187439]: 2025-10-09 10:02:58.461 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:02:58 compute-0 nova_compute[187439]: 2025-10-09 10:02:58.461 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:02:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:58.919Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:58.927Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:58.927Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:02:58.927Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:02:59 compute-0 nova_compute[187439]: 2025-10-09 10:02:59.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001105079320505307 of space, bias 1.0, pg target 0.3315237961515921 quantized to 32 (current 32)
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  9 10:02:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:02:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:59.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:59 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v898: 337 pgs: 337 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct  9 10:03:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:03:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:00.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:03:00 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:03:00.169 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ef217152-08e8-40c8-a663-3565c5b77d4a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  9 10:03:00 compute-0 nova_compute[187439]: 2025-10-09 10:03:00.245 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:03:00 compute-0 nova_compute[187439]: 2025-10-09 10:03:00.245 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:03:00 compute-0 nova_compute[187439]: 2025-10-09 10:03:00.245 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  9 10:03:00 compute-0 podman[202714]: 2025-10-09 10:03:00.608796428 +0000 UTC m=+0.045962361 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  9 10:03:01 compute-0 nova_compute[187439]: 2025-10-09 10:03:01.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:03:01 compute-0 nova_compute[187439]: 2025-10-09 10:03:01.247 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  9 10:03:01 compute-0 nova_compute[187439]: 2025-10-09 10:03:01.247 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  9 10:03:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:03:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:01.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:03:01 compute-0 nova_compute[187439]: 2025-10-09 10:03:01.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:03:01 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v899: 337 pgs: 337 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct  9 10:03:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:01 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:01 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:01 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:02 compute-0 nova_compute[187439]: 2025-10-09 10:03:02.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:03:02 compute-0 nova_compute[187439]: 2025-10-09 10:03:02.126 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "refresh_cache-d7ef9240-faf8-4f56-b3ac-7a3e0830de38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  9 10:03:02 compute-0 nova_compute[187439]: 2025-10-09 10:03:02.127 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquired lock "refresh_cache-d7ef9240-faf8-4f56-b3ac-7a3e0830de38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  9 10:03:02 compute-0 nova_compute[187439]: 2025-10-09 10:03:02.127 2 DEBUG nova.network.neutron [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  9 10:03:02 compute-0 nova_compute[187439]: 2025-10-09 10:03:02.127 2 DEBUG nova.objects.instance [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d7ef9240-faf8-4f56-b3ac-7a3e0830de38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  9 10:03:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:02.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:03:02] "GET /metrics HTTP/1.1" 200 48553 "" "Prometheus/2.51.0"
Oct  9 10:03:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:03:02] "GET /metrics HTTP/1.1" 200 48553 "" "Prometheus/2.51.0"
Oct  9 10:03:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:03.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:03 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v900: 337 pgs: 337 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Oct  9 10:03:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:04.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:04 compute-0 nova_compute[187439]: 2025-10-09 10:03:04.149 2 DEBUG nova.network.neutron [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Updating instance_info_cache with network_info: [{"id": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "address": "fa:16:3e:0b:ec:98", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22bc2188-39", "ovs_interfaceid": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  9 10:03:04 compute-0 nova_compute[187439]: 2025-10-09 10:03:04.160 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Releasing lock "refresh_cache-d7ef9240-faf8-4f56-b3ac-7a3e0830de38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  9 10:03:04 compute-0 nova_compute[187439]: 2025-10-09 10:03:04.160 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  9 10:03:04 compute-0 nova_compute[187439]: 2025-10-09 10:03:04.160 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:03:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:03:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:03:04 compute-0 ovn_controller[83056]: 2025-10-09T10:03:04Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0b:ec:98 10.100.0.13
Oct  9 10:03:04 compute-0 ovn_controller[83056]: 2025-10-09T10:03:04Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0b:ec:98 10.100.0.13
Oct  9 10:03:05 compute-0 podman[202734]: 2025-10-09 10:03:05.608845841 +0000 UTC m=+0.049611648 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct  9 10:03:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:05.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:05 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v901: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 199 op/s
Oct  9 10:03:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:06.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:06 compute-0 nova_compute[187439]: 2025-10-09 10:03:06.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:03:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:06 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:06 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:06 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:06 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:07 compute-0 nova_compute[187439]: 2025-10-09 10:03:07.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:03:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:07.088Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:07.099Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:07.099Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:07.100Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:07.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:07 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v902: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  9 10:03:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:08.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:08.920Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:08.928Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:08.928Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:08.929Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:09.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:09 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v903: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  9 10:03:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:03:10.116 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 10:03:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:03:10.116 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 10:03:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:03:10.117 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 10:03:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:10.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:11 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:11 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:11 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:03:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:11.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:03:11 compute-0 nova_compute[187439]: 2025-10-09 10:03:11.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:03:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  9 10:03:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3114795455' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  9 10:03:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  9 10:03:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3114795455' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  9 10:03:11 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v904: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.2 MiB/s wr, 63 op/s
Oct  9 10:03:12 compute-0 nova_compute[187439]: 2025-10-09 10:03:12.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:03:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:12.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:03:12] "GET /metrics HTTP/1.1" 200 48550 "" "Prometheus/2.51.0"
Oct  9 10:03:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:03:12] "GET /metrics HTTP/1.1" 200 48550 "" "Prometheus/2.51.0"
Oct  9 10:03:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:13.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:13 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v905: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  9 10:03:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:03:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:14.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:03:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:15.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:15 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v906: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  9 10:03:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:16 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:16.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:16 compute-0 podman[202790]: 2025-10-09 10:03:16.626511733 +0000 UTC m=+0.061950246 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller)
Oct  9 10:03:16 compute-0 nova_compute[187439]: 2025-10-09 10:03:16.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:03:17 compute-0 nova_compute[187439]: 2025-10-09 10:03:17.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:03:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:17.089Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:17.099Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:17.099Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:17.100Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:03:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:17.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:03:17 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v907: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 1 op/s
Oct  9 10:03:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:18.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:18.921Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:18.932Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:18.932Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:18.932Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.196 2 DEBUG oslo_concurrency.lockutils [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.196 2 DEBUG oslo_concurrency.lockutils [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.197 2 DEBUG oslo_concurrency.lockutils [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.197 2 DEBUG oslo_concurrency.lockutils [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.197 2 DEBUG oslo_concurrency.lockutils [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.198 2 INFO nova.compute.manager [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Terminating instance#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.199 2 DEBUG nova.compute.manager [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  9 10:03:19 compute-0 kernel: tap22bc2188-39 (unregistering): left promiscuous mode
Oct  9 10:03:19 compute-0 NetworkManager[982]: <info>  [1760004199.2390] device (tap22bc2188-39): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 10:03:19 compute-0 ovn_controller[83056]: 2025-10-09T10:03:19Z|00075|binding|INFO|Releasing lport 22bc2188-3978-476c-a2b1-0107e1eeb4cd from this chassis (sb_readonly=0)
Oct  9 10:03:19 compute-0 ovn_controller[83056]: 2025-10-09T10:03:19Z|00076|binding|INFO|Setting lport 22bc2188-3978-476c-a2b1-0107e1eeb4cd down in Southbound
Oct  9 10:03:19 compute-0 ovn_controller[83056]: 2025-10-09T10:03:19Z|00077|binding|INFO|Removing iface tap22bc2188-39 ovn-installed in OVS
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:03:19.253 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:ec:98 10.100.0.13'], port_security=['fa:16:3e:0b:ec:98 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd7ef9240-faf8-4f56-b3ac-7a3e0830de38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e36da7d-913d-4101-a7c2-e1698abf35be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a8dd992f-cc21-4be4-9d79-7a1b6fb1cc98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e49a2e1f-bde0-4698-a31c-366cd4b00fe5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f406a6797f0>], logical_port=22bc2188-3978-476c-a2b1-0107e1eeb4cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f406a6797f0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:03:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:03:19.255 92053 INFO neutron.agent.ovn.metadata.agent [-] Port 22bc2188-3978-476c-a2b1-0107e1eeb4cd in datapath 7e36da7d-913d-4101-a7c2-e1698abf35be unbound from our chassis#033[00m
Oct  9 10:03:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:03:19.256 92053 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7e36da7d-913d-4101-a7c2-e1698abf35be, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  9 10:03:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:03:19.262 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[b942cf39-82fa-45ed-8436-c4b7510a378a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:03:19.264 92053 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be namespace which is not needed anymore#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:19 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct  9 10:03:19 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000c.scope: Consumed 11.862s CPU time.
Oct  9 10:03:19 compute-0 systemd-machined[143379]: Machine qemu-5-instance-0000000c terminated.
Oct  9 10:03:19 compute-0 neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be[202633]: [NOTICE]   (202637) : haproxy version is 2.8.14-c23fe91
Oct  9 10:03:19 compute-0 neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be[202633]: [NOTICE]   (202637) : path to executable is /usr/sbin/haproxy
Oct  9 10:03:19 compute-0 neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be[202633]: [ALERT]    (202637) : Current worker (202639) exited with code 143 (Terminated)
Oct  9 10:03:19 compute-0 neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be[202633]: [WARNING]  (202637) : All workers exited. Exiting... (0)
Oct  9 10:03:19 compute-0 systemd[1]: libpod-267d6c751e8f411e75f22616a4bbaaa51988b0acb10ce4efc24665dcf3570b0a.scope: Deactivated successfully.
Oct  9 10:03:19 compute-0 podman[202836]: 2025-10-09 10:03:19.37155434 +0000 UTC m=+0.037269844 container died 267d6c751e8f411e75f22616a4bbaaa51988b0acb10ce4efc24665dcf3570b0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  9 10:03:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-267d6c751e8f411e75f22616a4bbaaa51988b0acb10ce4efc24665dcf3570b0a-userdata-shm.mount: Deactivated successfully.
Oct  9 10:03:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0e58e4544d3c0d05330051b5b1693e1f5dc7cacdf226fd5a44d99db672e3315-merged.mount: Deactivated successfully.
Oct  9 10:03:19 compute-0 podman[202836]: 2025-10-09 10:03:19.401800629 +0000 UTC m=+0.067516123 container cleanup 267d6c751e8f411e75f22616a4bbaaa51988b0acb10ce4efc24665dcf3570b0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:03:19 compute-0 systemd[1]: libpod-conmon-267d6c751e8f411e75f22616a4bbaaa51988b0acb10ce4efc24665dcf3570b0a.scope: Deactivated successfully.
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.428 2 INFO nova.virt.libvirt.driver [-] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Instance destroyed successfully.#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.428 2 DEBUG nova.objects.instance [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'resources' on Instance uuid d7ef9240-faf8-4f56-b3ac-7a3e0830de38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.437 2 DEBUG nova.virt.libvirt.vif [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T10:02:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1006874931',display_name='tempest-TestNetworkBasicOps-server-1006874931',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1006874931',id=12,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFDdjFIQj4FOLYlCs6zljk6wKa9pI2ISqD9Sb6SVhatdV3gRq8sNB/xPPzWRU7uKoU0bIS8yl5sqGcf3FjrbOxRvx3JpBVSln6lZ2WQLyfYlAFw2+zDNMalVPKJfvSSdSA==',key_name='tempest-TestNetworkBasicOps-17187709',keypairs=<?>,launch_index=0,launched_at=2025-10-09T10:02:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-zj7fuszu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T10:02:53Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=d7ef9240-faf8-4f56-b3ac-7a3e0830de38,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "address": "fa:16:3e:0b:ec:98", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22bc2188-39", "ovs_interfaceid": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.437 2 DEBUG nova.network.os_vif_util [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "address": "fa:16:3e:0b:ec:98", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22bc2188-39", "ovs_interfaceid": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.438 2 DEBUG nova.network.os_vif_util [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:ec:98,bridge_name='br-int',has_traffic_filtering=True,id=22bc2188-3978-476c-a2b1-0107e1eeb4cd,network=Network(7e36da7d-913d-4101-a7c2-e1698abf35be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22bc2188-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.438 2 DEBUG os_vif [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:ec:98,bridge_name='br-int',has_traffic_filtering=True,id=22bc2188-3978-476c-a2b1-0107e1eeb4cd,network=Network(7e36da7d-913d-4101-a7c2-e1698abf35be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22bc2188-39') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.440 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22bc2188-39, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.452 2 INFO os_vif [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:ec:98,bridge_name='br-int',has_traffic_filtering=True,id=22bc2188-3978-476c-a2b1-0107e1eeb4cd,network=Network(7e36da7d-913d-4101-a7c2-e1698abf35be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22bc2188-39')#033[00m
Oct  9 10:03:19 compute-0 podman[202861]: 2025-10-09 10:03:19.466634607 +0000 UTC m=+0.040290816 container remove 267d6c751e8f411e75f22616a4bbaaa51988b0acb10ce4efc24665dcf3570b0a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  9 10:03:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:03:19.471 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[5ad5428d-1fd5-436a-a478-d6a6528b7659]: (4, ('Thu Oct  9 10:03:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be (267d6c751e8f411e75f22616a4bbaaa51988b0acb10ce4efc24665dcf3570b0a)\n267d6c751e8f411e75f22616a4bbaaa51988b0acb10ce4efc24665dcf3570b0a\nThu Oct  9 10:03:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be (267d6c751e8f411e75f22616a4bbaaa51988b0acb10ce4efc24665dcf3570b0a)\n267d6c751e8f411e75f22616a4bbaaa51988b0acb10ce4efc24665dcf3570b0a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:03:19.472 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[89859173-f9fd-4245-a79c-d15b76f4f2a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:03:19.473 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e36da7d-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:19 compute-0 kernel: tap7e36da7d-90: left promiscuous mode
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:03:19.480 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[b3bf9b91-c862-482d-bc8e-45bf81b497c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:03:19.504 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[6f862d65-34ba-4716-9c56-bdcc6b54f61a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:03:19.505 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[01d0f4a3-ac3b-4ef6-a97d-0a22800e6b7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:03:19.518 192856 DEBUG oslo.privsep.daemon [-] privsep: reply[29f53003-2925-471f-ad05-23ab413ac619]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 185727, 'reachable_time': 15331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 202897, 'error': None, 'target': 'ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:03:19.521 92357 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  9 10:03:19 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:03:19.521 92357 DEBUG oslo.privsep.daemon [-] privsep: reply[8147b65f-be33-41cb-8dd6-0b55132df452]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d7e36da7d\x2d913d\x2d4101\x2da7c2\x2de1698abf35be.mount: Deactivated successfully.
Oct  9 10:03:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:03:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:03:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:03:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.634 2 INFO nova.virt.libvirt.driver [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Deleting instance files /var/lib/nova/instances/d7ef9240-faf8-4f56-b3ac-7a3e0830de38_del#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.635 2 INFO nova.virt.libvirt.driver [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Deletion of /var/lib/nova/instances/d7ef9240-faf8-4f56-b3ac-7a3e0830de38_del complete#033[00m
Oct  9 10:03:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:03:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.678 2 INFO nova.compute.manager [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Took 0.48 seconds to destroy the instance on the hypervisor.#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.678 2 DEBUG oslo.service.loopingcall [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.679 2 DEBUG nova.compute.manager [-] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.679 2 DEBUG nova.network.neutron [-] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  9 10:03:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:03:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:03:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:19.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.953 2 DEBUG nova.compute.manager [req-5a322438-6fe8-408d-a79f-b7772c5b2443 req-bd377534-2071-446c-a39b-ed24015ca6a1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Received event network-changed-22bc2188-3978-476c-a2b1-0107e1eeb4cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.953 2 DEBUG nova.compute.manager [req-5a322438-6fe8-408d-a79f-b7772c5b2443 req-bd377534-2071-446c-a39b-ed24015ca6a1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Refreshing instance network info cache due to event network-changed-22bc2188-3978-476c-a2b1-0107e1eeb4cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.954 2 DEBUG oslo_concurrency.lockutils [req-5a322438-6fe8-408d-a79f-b7772c5b2443 req-bd377534-2071-446c-a39b-ed24015ca6a1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-d7ef9240-faf8-4f56-b3ac-7a3e0830de38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.954 2 DEBUG oslo_concurrency.lockutils [req-5a322438-6fe8-408d-a79f-b7772c5b2443 req-bd377534-2071-446c-a39b-ed24015ca6a1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-d7ef9240-faf8-4f56-b3ac-7a3e0830de38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:03:19 compute-0 nova_compute[187439]: 2025-10-09 10:03:19.954 2 DEBUG nova.network.neutron [req-5a322438-6fe8-408d-a79f-b7772c5b2443 req-bd377534-2071-446c-a39b-ed24015ca6a1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Refreshing network info cache for port 22bc2188-3978-476c-a2b1-0107e1eeb4cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 10:03:19 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v908: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 1 op/s
Oct  9 10:03:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:20.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:20 compute-0 nova_compute[187439]: 2025-10-09 10:03:20.186 2 DEBUG nova.network.neutron [-] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:03:20 compute-0 nova_compute[187439]: 2025-10-09 10:03:20.195 2 INFO nova.compute.manager [-] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Took 0.52 seconds to deallocate network for instance.#033[00m
Oct  9 10:03:20 compute-0 nova_compute[187439]: 2025-10-09 10:03:20.228 2 DEBUG oslo_concurrency.lockutils [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:20 compute-0 nova_compute[187439]: 2025-10-09 10:03:20.229 2 DEBUG oslo_concurrency.lockutils [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:20 compute-0 nova_compute[187439]: 2025-10-09 10:03:20.283 2 DEBUG oslo_concurrency.processutils [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:03:20 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:03:20 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2378775170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:03:20 compute-0 nova_compute[187439]: 2025-10-09 10:03:20.642 2 DEBUG oslo_concurrency.processutils [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:03:20 compute-0 nova_compute[187439]: 2025-10-09 10:03:20.647 2 DEBUG nova.compute.provider_tree [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:03:20 compute-0 nova_compute[187439]: 2025-10-09 10:03:20.657 2 DEBUG nova.scheduler.client.report [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:03:20 compute-0 nova_compute[187439]: 2025-10-09 10:03:20.669 2 DEBUG oslo_concurrency.lockutils [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:20 compute-0 nova_compute[187439]: 2025-10-09 10:03:20.692 2 INFO nova.scheduler.client.report [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Deleted allocations for instance d7ef9240-faf8-4f56-b3ac-7a3e0830de38#033[00m
Oct  9 10:03:20 compute-0 nova_compute[187439]: 2025-10-09 10:03:20.744 2 DEBUG oslo_concurrency.lockutils [None req-6701918e-9407-4d2a-83a7-7c23f0d3e6de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:20 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:21 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:21 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:21 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:21 compute-0 nova_compute[187439]: 2025-10-09 10:03:21.180 2 DEBUG nova.network.neutron [req-5a322438-6fe8-408d-a79f-b7772c5b2443 req-bd377534-2071-446c-a39b-ed24015ca6a1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Updated VIF entry in instance network info cache for port 22bc2188-3978-476c-a2b1-0107e1eeb4cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 10:03:21 compute-0 nova_compute[187439]: 2025-10-09 10:03:21.181 2 DEBUG nova.network.neutron [req-5a322438-6fe8-408d-a79f-b7772c5b2443 req-bd377534-2071-446c-a39b-ed24015ca6a1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Updating instance_info_cache with network_info: [{"id": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "address": "fa:16:3e:0b:ec:98", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22bc2188-39", "ovs_interfaceid": "22bc2188-3978-476c-a2b1-0107e1eeb4cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:03:21 compute-0 nova_compute[187439]: 2025-10-09 10:03:21.206 2 DEBUG oslo_concurrency.lockutils [req-5a322438-6fe8-408d-a79f-b7772c5b2443 req-bd377534-2071-446c-a39b-ed24015ca6a1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-d7ef9240-faf8-4f56-b3ac-7a3e0830de38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:03:21 compute-0 nova_compute[187439]: 2025-10-09 10:03:21.379 2 DEBUG nova.compute.manager [req-f3004c0e-a46d-4e50-93d3-71c5faf1edb3 req-bbaea217-6bc1-48c2-bac9-bd3cf65cab9b b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Received event network-vif-unplugged-22bc2188-3978-476c-a2b1-0107e1eeb4cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:03:21 compute-0 nova_compute[187439]: 2025-10-09 10:03:21.379 2 DEBUG oslo_concurrency.lockutils [req-f3004c0e-a46d-4e50-93d3-71c5faf1edb3 req-bbaea217-6bc1-48c2-bac9-bd3cf65cab9b b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:21 compute-0 nova_compute[187439]: 2025-10-09 10:03:21.380 2 DEBUG oslo_concurrency.lockutils [req-f3004c0e-a46d-4e50-93d3-71c5faf1edb3 req-bbaea217-6bc1-48c2-bac9-bd3cf65cab9b b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:21 compute-0 nova_compute[187439]: 2025-10-09 10:03:21.380 2 DEBUG oslo_concurrency.lockutils [req-f3004c0e-a46d-4e50-93d3-71c5faf1edb3 req-bbaea217-6bc1-48c2-bac9-bd3cf65cab9b b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:21 compute-0 nova_compute[187439]: 2025-10-09 10:03:21.380 2 DEBUG nova.compute.manager [req-f3004c0e-a46d-4e50-93d3-71c5faf1edb3 req-bbaea217-6bc1-48c2-bac9-bd3cf65cab9b b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] No waiting events found dispatching network-vif-unplugged-22bc2188-3978-476c-a2b1-0107e1eeb4cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:03:21 compute-0 nova_compute[187439]: 2025-10-09 10:03:21.380 2 WARNING nova.compute.manager [req-f3004c0e-a46d-4e50-93d3-71c5faf1edb3 req-bbaea217-6bc1-48c2-bac9-bd3cf65cab9b b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Received unexpected event network-vif-unplugged-22bc2188-3978-476c-a2b1-0107e1eeb4cd for instance with vm_state deleted and task_state None.#033[00m
Oct  9 10:03:21 compute-0 nova_compute[187439]: 2025-10-09 10:03:21.381 2 DEBUG nova.compute.manager [req-f3004c0e-a46d-4e50-93d3-71c5faf1edb3 req-bbaea217-6bc1-48c2-bac9-bd3cf65cab9b b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Received event network-vif-plugged-22bc2188-3978-476c-a2b1-0107e1eeb4cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:03:21 compute-0 nova_compute[187439]: 2025-10-09 10:03:21.381 2 DEBUG oslo_concurrency.lockutils [req-f3004c0e-a46d-4e50-93d3-71c5faf1edb3 req-bbaea217-6bc1-48c2-bac9-bd3cf65cab9b b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:21 compute-0 nova_compute[187439]: 2025-10-09 10:03:21.381 2 DEBUG oslo_concurrency.lockutils [req-f3004c0e-a46d-4e50-93d3-71c5faf1edb3 req-bbaea217-6bc1-48c2-bac9-bd3cf65cab9b b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:21 compute-0 nova_compute[187439]: 2025-10-09 10:03:21.381 2 DEBUG oslo_concurrency.lockutils [req-f3004c0e-a46d-4e50-93d3-71c5faf1edb3 req-bbaea217-6bc1-48c2-bac9-bd3cf65cab9b b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "d7ef9240-faf8-4f56-b3ac-7a3e0830de38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:21 compute-0 nova_compute[187439]: 2025-10-09 10:03:21.381 2 DEBUG nova.compute.manager [req-f3004c0e-a46d-4e50-93d3-71c5faf1edb3 req-bbaea217-6bc1-48c2-bac9-bd3cf65cab9b b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] No waiting events found dispatching network-vif-plugged-22bc2188-3978-476c-a2b1-0107e1eeb4cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:03:21 compute-0 nova_compute[187439]: 2025-10-09 10:03:21.381 2 WARNING nova.compute.manager [req-f3004c0e-a46d-4e50-93d3-71c5faf1edb3 req-bbaea217-6bc1-48c2-bac9-bd3cf65cab9b b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Received unexpected event network-vif-plugged-22bc2188-3978-476c-a2b1-0107e1eeb4cd for instance with vm_state deleted and task_state None.#033[00m
Oct  9 10:03:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:21.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:21 compute-0 nova_compute[187439]: 2025-10-09 10:03:21.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:21 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v909: 337 pgs: 337 active+clean; 121 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 20 KiB/s wr, 30 op/s
Oct  9 10:03:22 compute-0 nova_compute[187439]: 2025-10-09 10:03:22.011 2 DEBUG nova.compute.manager [req-77d41cc1-92ca-4398-9bab-a25ab1693c2b req-1919df84-542d-41ee-94cf-e41131b4f84d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Received event network-vif-deleted-22bc2188-3978-476c-a2b1-0107e1eeb4cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:03:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:22.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:03:22] "GET /metrics HTTP/1.1" 200 48549 "" "Prometheus/2.51.0"
Oct  9 10:03:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:03:22] "GET /metrics HTTP/1.1" 200 48549 "" "Prometheus/2.51.0"
Oct  9 10:03:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:23.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:23 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v910: 337 pgs: 337 active+clean; 121 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 9.2 KiB/s wr, 29 op/s
Oct  9 10:03:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:24.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:24 compute-0 nova_compute[187439]: 2025-10-09 10:03:24.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:25.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:25 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v911: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 13 KiB/s wr, 58 op/s
Oct  9 10:03:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:25 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:25 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:25 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:25 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:26.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:26 compute-0 podman[202930]: 2025-10-09 10:03:26.602686242 +0000 UTC m=+0.044324076 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:03:26 compute-0 nova_compute[187439]: 2025-10-09 10:03:26.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:03:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:27.089Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:27.098Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:27.098Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:27.099Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
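[editor's note] All three ceph-dashboard webhook targets fail identically: the names np0005478302-4.shiftstack do not resolve through the DNS server at 192.168.122.80, so every POST to port 8443 dies in the dial step and Alertmanager keeps retrying (the "retry canceled after 7/8 attempts" errors). The failing lookup is easy to reproduce outside Alertmanager; the sketch below uses the system resolver (the stdlib cannot target 192.168.122.80:53 directly):

```python
import socket

# Hostnames and port taken from the Alertmanager errors above.
TARGETS = [
    "np0005478302.shiftstack",
    "np0005478303.shiftstack",
    "np0005478304.shiftstack",
]

for name in TARGETS:
    try:
        # The same kind of lookup Go's dialer performs before the HTTP POST.
        addrs = {ai[4][0] for ai in socket.getaddrinfo(name, 8443)}
        print(f"{name}: resolves to {sorted(addrs)}")
    except socket.gaierror as exc:
        # Mirrors the "no such host" failure in the log.
        print(f"{name}: lookup failed ({exc})")
```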
Oct  9 10:03:27 compute-0 nova_compute[187439]: 2025-10-09 10:03:27.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:03:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:27.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:27 compute-0 nova_compute[187439]: 2025-10-09 10:03:27.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:03:27 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v912: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 12 KiB/s wr, 57 op/s
Oct  9 10:03:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:28.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:28.923Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:28.930Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:28.931Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:28.931Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:29 compute-0 nova_compute[187439]: 2025-10-09 10:03:29.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:03:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:29.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:29 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v913: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 12 KiB/s wr, 57 op/s
Oct  9 10:03:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:30 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:30.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:31 compute-0 podman[202953]: 2025-10-09 10:03:31.627902162 +0000 UTC m=+0.063834215 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  9 10:03:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:31.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:31 compute-0 nova_compute[187439]: 2025-10-09 10:03:31.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:03:31 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v914: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 12 KiB/s wr, 58 op/s
Oct  9 10:03:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:32.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:03:32] "GET /metrics HTTP/1.1" 200 48549 "" "Prometheus/2.51.0"
Oct  9 10:03:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:03:32] "GET /metrics HTTP/1.1" 200 48549 "" "Prometheus/2.51.0"
Oct  9 10:03:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:03:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:33.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:03:33 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v915: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.5 KiB/s wr, 29 op/s
Oct  9 10:03:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:34.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:34 compute-0 nova_compute[187439]: 2025-10-09 10:03:34.427 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760004199.4258728, d7ef9240-faf8-4f56-b3ac-7a3e0830de38 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  9 10:03:34 compute-0 nova_compute[187439]: 2025-10-09 10:03:34.428 2 INFO nova.compute.manager [-] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] VM Stopped (Lifecycle Event)
Oct  9 10:03:34 compute-0 nova_compute[187439]: 2025-10-09 10:03:34.447 2 DEBUG nova.compute.manager [None req-85062e0a-14ec-4c1f-a0e9-a3f0c7f81d70 - - - - - -] [instance: d7ef9240-faf8-4f56-b3ac-7a3e0830de38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  9 10:03:34 compute-0 nova_compute[187439]: 2025-10-09 10:03:34.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:03:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:03:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:03:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.003000027s ======
Oct  9 10:03:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:35.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000027s
Oct  9 10:03:35 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v916: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.5 KiB/s wr, 29 op/s
Oct  9 10:03:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:36.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:36 compute-0 podman[203001]: 2025-10-09 10:03:36.636695239 +0000 UTC m=+0.072436514 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  9 10:03:36 compute-0 nova_compute[187439]: 2025-10-09 10:03:36.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:03:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:37.090Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:37.098Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:37.098Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:37.099Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:03:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:37.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:03:37 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v917: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 10:03:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:38 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:38.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:03:38 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:03:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 10:03:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:03:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 10:03:38 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v918: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct  9 10:03:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:03:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 10:03:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:03:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 10:03:38 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 10:03:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 10:03:38 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:03:38 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:03:38 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
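[editor's note] The audit lines above show the cephadm mgr module driving a refresh cycle through the mon: it regenerates a minimal ceph.conf, fetches the client.admin and client.bootstrap-osd keys, persists its own state under config-key (osd_remove_queue, spec.nfs.cephfs), and lists destroyed OSDs in the tree. The same read-only commands can be issued by hand with the ceph CLI; a sketch, assuming the CLI is available with admin credentials:

```python
import json
import subprocess

def ceph(*args: str) -> str:
    """Run a ceph CLI command and return stdout (admin keyring assumed)."""
    return subprocess.run(["ceph", *args], capture_output=True,
                          text=True, check=True).stdout

# Same prefixes as the mon_command dispatches logged above.
minimal_conf = ceph("config", "generate-minimal-conf")
destroyed = json.loads(ceph("osd", "tree", "destroyed", "--format", "json"))

print(minimal_conf)
# OSD entries only; the tree also contains host/root buckets.
print([n["id"] for n in destroyed.get("nodes", []) if n.get("type") == "osd"])
```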
Oct  9 10:03:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:38.924Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:38.934Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:38.934Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:38.934Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:39 compute-0 podman[203180]: 2025-10-09 10:03:39.029194083 +0000 UTC m=+0.032767620 container create 23915dbf17a58e3471ce6b33331df04755a7a267ce30c3b59a58746eeb5e9c67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_wilson, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:03:39 compute-0 systemd[1]: Started libpod-conmon-23915dbf17a58e3471ce6b33331df04755a7a267ce30c3b59a58746eeb5e9c67.scope.
Oct  9 10:03:39 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:03:39 compute-0 podman[203180]: 2025-10-09 10:03:39.092627722 +0000 UTC m=+0.096201280 container init 23915dbf17a58e3471ce6b33331df04755a7a267ce30c3b59a58746eeb5e9c67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_wilson, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:03:39 compute-0 podman[203180]: 2025-10-09 10:03:39.097497659 +0000 UTC m=+0.101071206 container start 23915dbf17a58e3471ce6b33331df04755a7a267ce30c3b59a58746eeb5e9c67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_wilson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:03:39 compute-0 podman[203180]: 2025-10-09 10:03:39.09901611 +0000 UTC m=+0.102589647 container attach 23915dbf17a58e3471ce6b33331df04755a7a267ce30c3b59a58746eeb5e9c67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_wilson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:03:39 compute-0 nervous_wilson[203193]: 167 167
Oct  9 10:03:39 compute-0 systemd[1]: libpod-23915dbf17a58e3471ce6b33331df04755a7a267ce30c3b59a58746eeb5e9c67.scope: Deactivated successfully.
Oct  9 10:03:39 compute-0 podman[203180]: 2025-10-09 10:03:39.102553886 +0000 UTC m=+0.106127433 container died 23915dbf17a58e3471ce6b33331df04755a7a267ce30c3b59a58746eeb5e9c67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_wilson, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  9 10:03:39 compute-0 podman[203180]: 2025-10-09 10:03:39.017969304 +0000 UTC m=+0.021542840 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:03:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-51bcb1791b6779a5c8406a7e3e85a331aa78db1cd609e4293d0204928c7459c4-merged.mount: Deactivated successfully.
Oct  9 10:03:39 compute-0 podman[203180]: 2025-10-09 10:03:39.124016184 +0000 UTC m=+0.127589721 container remove 23915dbf17a58e3471ce6b33331df04755a7a267ce30c3b59a58746eeb5e9c67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  9 10:03:39 compute-0 systemd[1]: libpod-conmon-23915dbf17a58e3471ce6b33331df04755a7a267ce30c3b59a58746eeb5e9c67.scope: Deactivated successfully.
Oct  9 10:03:39 compute-0 podman[203215]: 2025-10-09 10:03:39.270636271 +0000 UTC m=+0.038004429 container create 15ab4084fb91abb99d2037f60db7b1ffc0b9953b95d7811189b8908019fa1a10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:03:39 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:03:39 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:03:39 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:03:39 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:03:39 compute-0 systemd[1]: Started libpod-conmon-15ab4084fb91abb99d2037f60db7b1ffc0b9953b95d7811189b8908019fa1a10.scope.
Oct  9 10:03:39 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a205ae95f5f0d825e8a60a9329545d74b0023f5cd711713915a34e5ec0e2f6fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a205ae95f5f0d825e8a60a9329545d74b0023f5cd711713915a34e5ec0e2f6fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a205ae95f5f0d825e8a60a9329545d74b0023f5cd711713915a34e5ec0e2f6fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a205ae95f5f0d825e8a60a9329545d74b0023f5cd711713915a34e5ec0e2f6fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a205ae95f5f0d825e8a60a9329545d74b0023f5cd711713915a34e5ec0e2f6fa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:03:39 compute-0 podman[203215]: 2025-10-09 10:03:39.346505511 +0000 UTC m=+0.113873689 container init 15ab4084fb91abb99d2037f60db7b1ffc0b9953b95d7811189b8908019fa1a10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct  9 10:03:39 compute-0 podman[203215]: 2025-10-09 10:03:39.256176527 +0000 UTC m=+0.023544703 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:03:39 compute-0 podman[203215]: 2025-10-09 10:03:39.351853989 +0000 UTC m=+0.119222147 container start 15ab4084fb91abb99d2037f60db7b1ffc0b9953b95d7811189b8908019fa1a10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:03:39 compute-0 podman[203215]: 2025-10-09 10:03:39.353049552 +0000 UTC m=+0.120417710 container attach 15ab4084fb91abb99d2037f60db7b1ffc0b9953b95d7811189b8908019fa1a10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bell, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 10:03:39 compute-0 nova_compute[187439]: 2025-10-09 10:03:39.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:03:39 compute-0 boring_bell[203228]: --> passed data devices: 0 physical, 1 LVM
Oct  9 10:03:39 compute-0 boring_bell[203228]: --> All data devices are unavailable
Oct  9 10:03:39 compute-0 systemd[1]: libpod-15ab4084fb91abb99d2037f60db7b1ffc0b9953b95d7811189b8908019fa1a10.scope: Deactivated successfully.
Oct  9 10:03:39 compute-0 podman[203215]: 2025-10-09 10:03:39.660712301 +0000 UTC m=+0.428080458 container died 15ab4084fb91abb99d2037f60db7b1ffc0b9953b95d7811189b8908019fa1a10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:03:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a205ae95f5f0d825e8a60a9329545d74b0023f5cd711713915a34e5ec0e2f6fa-merged.mount: Deactivated successfully.
Oct  9 10:03:39 compute-0 podman[203215]: 2025-10-09 10:03:39.683728437 +0000 UTC m=+0.451096595 container remove 15ab4084fb91abb99d2037f60db7b1ffc0b9953b95d7811189b8908019fa1a10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bell, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  9 10:03:39 compute-0 systemd[1]: libpod-conmon-15ab4084fb91abb99d2037f60db7b1ffc0b9953b95d7811189b8908019fa1a10.scope: Deactivated successfully.
Oct  9 10:03:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:39.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:40.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:40 compute-0 podman[203337]: 2025-10-09 10:03:40.182340046 +0000 UTC m=+0.035816836 container create 9a2af06accfcf2d67b223d91896e2422e252feec7fbaf4b38288f8cdc65d7320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 10:03:40 compute-0 systemd[1]: Started libpod-conmon-9a2af06accfcf2d67b223d91896e2422e252feec7fbaf4b38288f8cdc65d7320.scope.
Oct  9 10:03:40 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:03:40 compute-0 podman[203337]: 2025-10-09 10:03:40.247668075 +0000 UTC m=+0.101144855 container init 9a2af06accfcf2d67b223d91896e2422e252feec7fbaf4b38288f8cdc65d7320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_mclaren, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:03:40 compute-0 podman[203337]: 2025-10-09 10:03:40.253470859 +0000 UTC m=+0.106947649 container start 9a2af06accfcf2d67b223d91896e2422e252feec7fbaf4b38288f8cdc65d7320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_mclaren, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  9 10:03:40 compute-0 podman[203337]: 2025-10-09 10:03:40.25490464 +0000 UTC m=+0.108381581 container attach 9a2af06accfcf2d67b223d91896e2422e252feec7fbaf4b38288f8cdc65d7320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_mclaren, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:03:40 compute-0 strange_mclaren[203350]: 167 167
Oct  9 10:03:40 compute-0 podman[203337]: 2025-10-09 10:03:40.257874987 +0000 UTC m=+0.111351777 container died 9a2af06accfcf2d67b223d91896e2422e252feec7fbaf4b38288f8cdc65d7320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_mclaren, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:03:40 compute-0 systemd[1]: libpod-9a2af06accfcf2d67b223d91896e2422e252feec7fbaf4b38288f8cdc65d7320.scope: Deactivated successfully.
Oct  9 10:03:40 compute-0 podman[203337]: 2025-10-09 10:03:40.16652609 +0000 UTC m=+0.020002900 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:03:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-edee011bce0ba8ce5202f55c23ce6e28b53f0bd18f2079cf871b828fc642bfb2-merged.mount: Deactivated successfully.
Oct  9 10:03:40 compute-0 podman[203337]: 2025-10-09 10:03:40.278979852 +0000 UTC m=+0.132456642 container remove 9a2af06accfcf2d67b223d91896e2422e252feec7fbaf4b38288f8cdc65d7320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_mclaren, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 10:03:40 compute-0 systemd[1]: libpod-conmon-9a2af06accfcf2d67b223d91896e2422e252feec7fbaf4b38288f8cdc65d7320.scope: Deactivated successfully.
Oct  9 10:03:40 compute-0 podman[203372]: 2025-10-09 10:03:40.424845047 +0000 UTC m=+0.040103013 container create c5c015852c4bbfc49dfa96e48fdb7ae7d5a707052e15e379875c0303d555052f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  9 10:03:40 compute-0 systemd[1]: Started libpod-conmon-c5c015852c4bbfc49dfa96e48fdb7ae7d5a707052e15e379875c0303d555052f.scope.
Oct  9 10:03:40 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c22c645f6a8bd29cefc35cd859d7334bffa245ab96ea0e9b46a7e44e42002e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c22c645f6a8bd29cefc35cd859d7334bffa245ab96ea0e9b46a7e44e42002e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c22c645f6a8bd29cefc35cd859d7334bffa245ab96ea0e9b46a7e44e42002e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c22c645f6a8bd29cefc35cd859d7334bffa245ab96ea0e9b46a7e44e42002e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:03:40 compute-0 podman[203372]: 2025-10-09 10:03:40.409106863 +0000 UTC m=+0.024364849 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:03:40 compute-0 podman[203372]: 2025-10-09 10:03:40.511248977 +0000 UTC m=+0.126506942 container init c5c015852c4bbfc49dfa96e48fdb7ae7d5a707052e15e379875c0303d555052f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_haibt, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  9 10:03:40 compute-0 podman[203372]: 2025-10-09 10:03:40.518004595 +0000 UTC m=+0.133262561 container start c5c015852c4bbfc49dfa96e48fdb7ae7d5a707052e15e379875c0303d555052f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_haibt, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:03:40 compute-0 podman[203372]: 2025-10-09 10:03:40.519271232 +0000 UTC m=+0.134529208 container attach c5c015852c4bbfc49dfa96e48fdb7ae7d5a707052e15e379875c0303d555052f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:03:40 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v919: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]: {
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:    "1": [
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:        {
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:            "devices": [
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:                "/dev/loop3"
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:            ],
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:            "lv_name": "ceph_lv0",
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:            "lv_size": "21470642176",
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:            "name": "ceph_lv0",
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:            "tags": {
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:                "ceph.cephx_lockbox_secret": "",
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:                "ceph.cluster_name": "ceph",
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:                "ceph.crush_device_class": "",
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:                "ceph.encrypted": "0",
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:                "ceph.osd_id": "1",
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:                "ceph.type": "block",
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:                "ceph.vdo": "0",
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:                "ceph.with_tpm": "0"
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:            },
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:            "type": "block",
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:            "vg_name": "ceph_vg0"
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:        }
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]:    ]
Oct  9 10:03:40 compute-0 stupefied_haibt[203385]: }
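The JSON block above is what the short-lived cephadm helper container (stupefied_haibt) prints when it scans local LVM state: OSD id "1" maps to ceph_vg0/ceph_lv0 on /dev/loop3, and the ceph.* LV tags carry the cluster and OSD fsids. A minimal parsing sketch, assuming only the shape shown above (the helper function name is ours, not cephadm's):

    import json

    def osd_backing_devices(report_text):
        """Map OSD id -> backing devices for a report shaped like the log above:
        {"<osd_id>": [{"devices": [...], "lv_path": ..., "tags": {...}}, ...]}."""
        report = json.loads(report_text)
        return {
            osd_id: sorted({dev for lv in lvs for dev in lv.get("devices", [])})
            for osd_id, lvs in report.items()
        }

    # For the report above this returns {"1": ["/dev/loop3"]}.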
Oct  9 10:03:40 compute-0 systemd[1]: libpod-c5c015852c4bbfc49dfa96e48fdb7ae7d5a707052e15e379875c0303d555052f.scope: Deactivated successfully.
Oct  9 10:03:40 compute-0 conmon[203385]: conmon c5c015852c4bbfc49dfa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c5c015852c4bbfc49dfa96e48fdb7ae7d5a707052e15e379875c0303d555052f.scope/container/memory.events
Oct  9 10:03:40 compute-0 podman[203372]: 2025-10-09 10:03:40.790813383 +0000 UTC m=+0.406071350 container died c5c015852c4bbfc49dfa96e48fdb7ae7d5a707052e15e379875c0303d555052f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  9 10:03:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c22c645f6a8bd29cefc35cd859d7334bffa245ab96ea0e9b46a7e44e42002e3-merged.mount: Deactivated successfully.
Oct  9 10:03:40 compute-0 podman[203372]: 2025-10-09 10:03:40.818170128 +0000 UTC m=+0.433428094 container remove c5c015852c4bbfc49dfa96e48fdb7ae7d5a707052e15e379875c0303d555052f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_haibt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:03:40 compute-0 systemd[1]: libpod-conmon-c5c015852c4bbfc49dfa96e48fdb7ae7d5a707052e15e379875c0303d555052f.scope: Deactivated successfully.
Oct  9 10:03:40 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 10:03:40 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 5872 writes, 25K keys, 5872 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s#012Cumulative WAL: 5872 writes, 5872 syncs, 1.00 writes per sync, written: 0.05 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1582 writes, 6726 keys, 1582 commit groups, 1.0 writes per commit group, ingest: 11.47 MB, 0.02 MB/s#012Interval WAL: 1582 writes, 1582 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    368.2      0.11              0.08        14    0.008       0      0       0.0       0.0#012  L6      1/0   11.77 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.2    442.3    378.9      0.44              0.27        13    0.034     66K   6847       0.0       0.0#012 Sum      1/0   11.77 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.2    355.2    376.8      0.54              0.35        27    0.020     66K   6847       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.9    315.8    309.9      0.23              0.14        10    0.023     29K   2536       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    442.3    378.9      0.44              0.27        13    0.034     66K   6847       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    373.2      0.11              0.08        13    0.008       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     28.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.039, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.20 GB write, 0.11 MB/s write, 0.19 GB read, 0.11 MB/s read, 0.5 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557b3d66b350#2 capacity: 304.00 MB usage: 14.03 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000106 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(804,13.50 MB,4.43946%) FilterBlock(28,198.73 KB,0.063841%) IndexBlock(28,348.50 KB,0.111951%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  9 10:03:41 compute-0 podman[203489]: 2025-10-09 10:03:41.345949113 +0000 UTC m=+0.038051838 container create b8643831bad712ca3d1a8df349fbdf5281712db2e66708d1424acec34f50b637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bhabha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:03:41 compute-0 systemd[1]: Started libpod-conmon-b8643831bad712ca3d1a8df349fbdf5281712db2e66708d1424acec34f50b637.scope.
Oct  9 10:03:41 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:03:41 compute-0 podman[203489]: 2025-10-09 10:03:41.410227785 +0000 UTC m=+0.102330530 container init b8643831bad712ca3d1a8df349fbdf5281712db2e66708d1424acec34f50b637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bhabha, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:03:41 compute-0 podman[203489]: 2025-10-09 10:03:41.415133388 +0000 UTC m=+0.107236114 container start b8643831bad712ca3d1a8df349fbdf5281712db2e66708d1424acec34f50b637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  9 10:03:41 compute-0 hopeful_bhabha[203502]: 167 167
Oct  9 10:03:41 compute-0 podman[203489]: 2025-10-09 10:03:41.418128261 +0000 UTC m=+0.110230986 container attach b8643831bad712ca3d1a8df349fbdf5281712db2e66708d1424acec34f50b637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bhabha, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:03:41 compute-0 systemd[1]: libpod-b8643831bad712ca3d1a8df349fbdf5281712db2e66708d1424acec34f50b637.scope: Deactivated successfully.
Oct  9 10:03:41 compute-0 podman[203489]: 2025-10-09 10:03:41.418999162 +0000 UTC m=+0.111101887 container died b8643831bad712ca3d1a8df349fbdf5281712db2e66708d1424acec34f50b637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bhabha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  9 10:03:41 compute-0 podman[203489]: 2025-10-09 10:03:41.333615915 +0000 UTC m=+0.025718640 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:03:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-405c067eb694c265d053eb3eb953fedf9c3caa497840a30ba52ac7cbee0d55d2-merged.mount: Deactivated successfully.
Oct  9 10:03:41 compute-0 podman[203489]: 2025-10-09 10:03:41.439542939 +0000 UTC m=+0.131645665 container remove b8643831bad712ca3d1a8df349fbdf5281712db2e66708d1424acec34f50b637 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bhabha, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:03:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:41 compute-0 systemd[1]: libpod-conmon-b8643831bad712ca3d1a8df349fbdf5281712db2e66708d1424acec34f50b637.scope: Deactivated successfully.
Oct  9 10:03:41 compute-0 podman[203525]: 2025-10-09 10:03:41.589775533 +0000 UTC m=+0.039308364 container create d84807530c80bbd63a2625761aacdd27702e51a2822dcb3085658aa810403e52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  9 10:03:41 compute-0 systemd[1]: Started libpod-conmon-d84807530c80bbd63a2625761aacdd27702e51a2822dcb3085658aa810403e52.scope.
Oct  9 10:03:41 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:03:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5caf134ae8a405229bf538f4704492aff6b1db7ad1dc19fe0839ad30a0cd5e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:03:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5caf134ae8a405229bf538f4704492aff6b1db7ad1dc19fe0839ad30a0cd5e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:03:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5caf134ae8a405229bf538f4704492aff6b1db7ad1dc19fe0839ad30a0cd5e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:03:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5caf134ae8a405229bf538f4704492aff6b1db7ad1dc19fe0839ad30a0cd5e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:03:41 compute-0 podman[203525]: 2025-10-09 10:03:41.664116024 +0000 UTC m=+0.113648845 container init d84807530c80bbd63a2625761aacdd27702e51a2822dcb3085658aa810403e52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swartz, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:03:41 compute-0 podman[203525]: 2025-10-09 10:03:41.574677606 +0000 UTC m=+0.024210448 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:03:41 compute-0 podman[203525]: 2025-10-09 10:03:41.670323961 +0000 UTC m=+0.119856782 container start d84807530c80bbd63a2625761aacdd27702e51a2822dcb3085658aa810403e52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:03:41 compute-0 podman[203525]: 2025-10-09 10:03:41.67179971 +0000 UTC m=+0.121332532 container attach d84807530c80bbd63a2625761aacdd27702e51a2822dcb3085658aa810403e52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:03:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:41.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
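The anonymous "HEAD / HTTP/1.0" 200 entries recur from 192.168.122.100 and 192.168.122.102 roughly every two seconds through the rest of this window, the pattern of load-balancer-style health probes rather than user traffic. A rough stdlib equivalent of one probe; the URL and port are assumptions, since the log does not show which address the beast frontend listens on:

    import urllib.request

    # Hypothetical endpoint: substitute the real radosgw address and port.
    req = urllib.request.Request("http://compute-0:8080/", method="HEAD")
    with urllib.request.urlopen(req, timeout=2) as resp:
        print(resp.status)  # the probes above are logged with http_status=200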
Oct  9 10:03:41 compute-0 nova_compute[187439]: 2025-10-09 10:03:41.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:42.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:03:42] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Oct  9 10:03:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:03:42] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Oct  9 10:03:42 compute-0 strange_swartz[203538]: {}
Oct  9 10:03:42 compute-0 lvm[203615]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:03:42 compute-0 lvm[203615]: VG ceph_vg0 finished
Oct  9 10:03:42 compute-0 systemd[1]: libpod-d84807530c80bbd63a2625761aacdd27702e51a2822dcb3085658aa810403e52.scope: Deactivated successfully.
Oct  9 10:03:42 compute-0 systemd[1]: libpod-d84807530c80bbd63a2625761aacdd27702e51a2822dcb3085658aa810403e52.scope: Consumed 1.090s CPU time.
Oct  9 10:03:42 compute-0 podman[203525]: 2025-10-09 10:03:42.328653966 +0000 UTC m=+0.778186787 container died d84807530c80bbd63a2625761aacdd27702e51a2822dcb3085658aa810403e52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swartz, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:03:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5caf134ae8a405229bf538f4704492aff6b1db7ad1dc19fe0839ad30a0cd5e0-merged.mount: Deactivated successfully.
Oct  9 10:03:42 compute-0 podman[203525]: 2025-10-09 10:03:42.354845205 +0000 UTC m=+0.804378026 container remove d84807530c80bbd63a2625761aacdd27702e51a2822dcb3085658aa810403e52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:03:42 compute-0 systemd[1]: libpod-conmon-d84807530c80bbd63a2625761aacdd27702e51a2822dcb3085658aa810403e52.scope: Deactivated successfully.
Oct  9 10:03:42 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:03:42 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:03:42 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:03:42 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:03:42 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v920: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 10:03:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:43 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:03:43 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:03:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:43.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:03:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:44.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:03:44 compute-0 nova_compute[187439]: 2025-10-09 10:03:44.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:44 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v921: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 10:03:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.002000019s ======
Oct  9 10:03:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:45.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000019s
Oct  9 10:03:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:46.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:46 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v922: 337 pgs: 337 active+clean; 88 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.0 MiB/s wr, 104 op/s
Oct  9 10:03:46 compute-0 nova_compute[187439]: 2025-10-09 10:03:46.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:47.090Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:47.099Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:47.100Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:47.100Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
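All three webhook receivers fail identically: the dashboard hostnames np0005478302-304.shiftstack do not resolve via the resolver at 192.168.122.80:53, so every notify attempt ends in "no such host". A quick resolution check from this host, stdlib only; note it uses the system resolver, which may differ from the one the alertmanager container sees:

    import socket

    # Hostnames taken verbatim from the alertmanager errors above.
    for host in ("np0005478302.shiftstack",
                 "np0005478303.shiftstack",
                 "np0005478304.shiftstack"):
        try:
            addrs = sorted({ai[4][0] for ai in socket.getaddrinfo(host, 8443)})
            print(f"{host}: {addrs}")
        except socket.gaierror as exc:
            # Matches the "no such host" failures logged above.
            print(f"{host}: unresolvable ({exc})")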
Oct  9 10:03:47 compute-0 podman[203656]: 2025-10-09 10:03:47.641645876 +0000 UTC m=+0.079467921 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct  9 10:03:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:47.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:48.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:48 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v923: 337 pgs: 337 active+clean; 88 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.0 MiB/s wr, 104 op/s
Oct  9 10:03:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:48.926Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:48.936Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:48.937Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:48.937Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:49 compute-0 nova_compute[187439]: 2025-10-09 10:03:49.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_10:03:49
Oct  9 10:03:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 10:03:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 10:03:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['.nfs', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'images', 'default.rgw.control', '.mgr', 'vms', '.rgw.root', 'backups', 'cephfs.cephfs.data']
Oct  9 10:03:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
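The balancer pass above ran in upmap mode with a 5% max-misplaced budget and prepared 0 of 10 candidate changes, i.e. placement is already optimal for these 337 PGs. The same state can be queried on demand; a sketch assuming a local admin keyring, with the JSON key names treated as assumptions about the `ceph balancer status` output schema:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    status = json.loads(out)
    # Expected to print something like: upmap True
    print(status.get("mode"), status.get("active"))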
Oct  9 10:03:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:03:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:03:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:03:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:03:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:03:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:03:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 10:03:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:03:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:03:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:03:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:03:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:03:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:03:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:49.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 10:03:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:03:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:03:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:03:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:03:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:50.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:50 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v924: 337 pgs: 337 active+clean; 88 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct  9 10:03:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:51.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:51 compute-0 nova_compute[187439]: 2025-10-09 10:03:51.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:52.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:03:52] "GET /metrics HTTP/1.1" 200 48547 "" "Prometheus/2.51.0"
Oct  9 10:03:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:03:52] "GET /metrics HTTP/1.1" 200 48547 "" "Prometheus/2.51.0"
Oct  9 10:03:52 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v925: 337 pgs: 337 active+clean; 88 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct  9 10:03:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:53.572Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:53.572Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:53.573Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:53.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:54.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:54 compute-0 nova_compute[187439]: 2025-10-09 10:03:54.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:03:54 compute-0 nova_compute[187439]: 2025-10-09 10:03:54.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:54 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v926: 337 pgs: 337 active+clean; 88 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct  9 10:03:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:55.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:56.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:56 compute-0 nova_compute[187439]: 2025-10-09 10:03:56.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:03:56 compute-0 nova_compute[187439]: 2025-10-09 10:03:56.261 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:56 compute-0 nova_compute[187439]: 2025-10-09 10:03:56.261 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:56 compute-0 nova_compute[187439]: 2025-10-09 10:03:56.261 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:56 compute-0 nova_compute[187439]: 2025-10-09 10:03:56.261 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:03:56 compute-0 nova_compute[187439]: 2025-10-09 10:03:56.262 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:03:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:56 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v927: 337 pgs: 337 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Oct  9 10:03:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:03:56 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/325562166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:03:56 compute-0 nova_compute[187439]: 2025-10-09 10:03:56.626 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
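Nova's resource audit sizes its RBD backend by shelling out to the `ceph df` command shown above (a 0.365 s round trip here). A hedged sketch of the same probe, pulling cluster-wide capacity from the stats section; the key names are assumptions about the `ceph df --format=json` schema:

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.run(cmd, check=True,
                                   capture_output=True, text=True).stdout)

    # "stats" carries cluster-wide byte counters; total_bytes and
    # total_avail_bytes are assumed present.
    stats = df["stats"]
    print(f"{stats['total_avail_bytes'] / 2**30:.1f} GiB free "
          f"of {stats['total_bytes'] / 2**30:.1f} GiB")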
Oct  9 10:03:56 compute-0 podman[203737]: 2025-10-09 10:03:56.72384856 +0000 UTC m=+0.060247227 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  9 10:03:56 compute-0 nova_compute[187439]: 2025-10-09 10:03:56.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:56 compute-0 nova_compute[187439]: 2025-10-09 10:03:56.887 2 WARNING nova.virt.libvirt.driver [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:03:56 compute-0 nova_compute[187439]: 2025-10-09 10:03:56.888 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4687MB free_disk=59.96738052368164GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:03:56 compute-0 nova_compute[187439]: 2025-10-09 10:03:56.888 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:56 compute-0 nova_compute[187439]: 2025-10-09 10:03:56.888 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:56 compute-0 nova_compute[187439]: 2025-10-09 10:03:56.930 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:03:56 compute-0 nova_compute[187439]: 2025-10-09 10:03:56.931 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:03:56 compute-0 nova_compute[187439]: 2025-10-09 10:03:56.945 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:03:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:56 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:56 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:56 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:03:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:57.091Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:57.099Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:57.099Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:57.100Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
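All three webhook targets fail the same way: the *.shiftstack names do not resolve against the DNS server at 192.168.122.80. A minimal reproduction of that lookup, assuming the third-party dnspython package is available; the hostnames and resolver address are taken from the errors above:

    import dns.resolver  # third-party: dnspython

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["192.168.122.80"]  # the resolver named in the errors

    for host in ("np0005478302.shiftstack",
                 "np0005478303.shiftstack",
                 "np0005478304.shiftstack"):
        try:
            answer = resolver.resolve(host, "A")
            print(host, [rr.address for rr in answer])
        except dns.resolver.NXDOMAIN:
            print(host, "NXDOMAIN (the 'no such host' seen above)")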
Oct  9 10:03:57 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:03:57 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/526253334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:03:57 compute-0 nova_compute[187439]: 2025-10-09 10:03:57.329 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.383s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
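The 0.383s round trip above is nova shelling out for cluster/pool statistics. A minimal sketch of the same call via subprocess, assuming the ceph CLI and the client.openstack keyring exist on the host; the JSON key names follow current Ceph releases and may vary:

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])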
Oct  9 10:03:57 compute-0 nova_compute[187439]: 2025-10-09 10:03:57.334 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:03:57 compute-0 nova_compute[187439]: 2025-10-09 10:03:57.346 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
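Placement derives schedulable capacity from these numbers as roughly (total - reserved) * allocation_ratio. A worked check against the inventory line above:

    # Values copied from the set_inventory_for_provider line above.
    inventory = {
        "VCPU": {"total": 4, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 16.0, MEMORY_MB 7168.0, DISK_GB 52.2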
Oct  9 10:03:57 compute-0 nova_compute[187439]: 2025-10-09 10:03:57.360 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:03:57 compute-0 nova_compute[187439]: 2025-10-09 10:03:57.360 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
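The acquire/release pair around "compute_resources" (waited 0.000s, held 0.472s) is oslo.concurrency's standard lock logging. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the function body is purely illustrative:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # Only one thread in this process runs the resource-tracker update
        # at a time; lockutils emits acquire/release lines like those above.
        pass

    update_available_resource()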
Oct  9 10:03:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:57.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:58.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:58 compute-0 nova_compute[187439]: 2025-10-09 10:03:58.361 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:03:58 compute-0 nova_compute[187439]: 2025-10-09 10:03:58.362 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:03:58 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v928: 337 pgs: 337 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 619 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Oct  9 10:03:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:58.927Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 9 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:58.935Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:58.935Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:03:58.935Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:03:59 compute-0 nova_compute[187439]: 2025-10-09 10:03:59.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00075666583235658 of space, bias 1.0, pg target 0.226999749706974 quantized to 32 (current 32)
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:03:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
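Each "pg target ... quantized to N (current N)" line above rounds a fractional PG target to a power of two before comparing it with the pool's current pg_num. A toy version of just that rounding-up step, not the module's full decision logic:

    def next_power_of_two(n: float) -> int:
        """Round up to the next power of two, flooring at 1."""
        p = 1
        while p < n:
            p *= 2
        return p

    print(next_power_of_two(0.227))   # 1
    print(next_power_of_two(20.0))    # 32
    print(next_power_of_two(0.0006))  # 1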
Oct  9 10:03:59 compute-0 nova_compute[187439]: 2025-10-09 10:03:59.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:03:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:59.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:00.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:00 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v929: 337 pgs: 337 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 619 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Oct  9 10:04:01 compute-0 nova_compute[187439]: 2025-10-09 10:04:01.242 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:01.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:01 compute-0 nova_compute[187439]: 2025-10-09 10:04:01.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:01 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:01 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:01 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:04:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:02.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:04:02 compute-0 nova_compute[187439]: 2025-10-09 10:04:02.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:02 compute-0 nova_compute[187439]: 2025-10-09 10:04:02.247 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:04:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:04:02] "GET /metrics HTTP/1.1" 200 48547 "" "Prometheus/2.51.0"
Oct  9 10:04:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:04:02] "GET /metrics HTTP/1.1" 200 48547 "" "Prometheus/2.51.0"
Oct  9 10:04:02 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v930: 337 pgs: 337 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 619 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Oct  9 10:04:02 compute-0 podman[203782]: 2025-10-09 10:04:02.610799054 +0000 UTC m=+0.049835269 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  9 10:04:02 compute-0 ovn_controller[83056]: 2025-10-09T10:04:02Z|00078|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct  9 10:04:03 compute-0 nova_compute[187439]: 2025-10-09 10:04:03.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:03 compute-0 nova_compute[187439]: 2025-10-09 10:04:03.248 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:04:03 compute-0 nova_compute[187439]: 2025-10-09 10:04:03.248 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:04:03 compute-0 nova_compute[187439]: 2025-10-09 10:04:03.261 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 10:04:03 compute-0 nova_compute[187439]: 2025-10-09 10:04:03.261 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:03.565Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:03.573Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:03.574Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:03.574Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:03.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:04.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:04 compute-0 nova_compute[187439]: 2025-10-09 10:04:04.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v931: 337 pgs: 337 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct  9 10:04:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:04:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:04:05 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:04:05.002 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:04:05 compute-0 nova_compute[187439]: 2025-10-09 10:04:05.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:05 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:04:05.003 92053 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 10:04:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:05.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:04:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:06.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:04:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v932: 337 pgs: 337 active+clean; 41 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct  9 10:04:06 compute-0 nova_compute[187439]: 2025-10-09 10:04:06.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:06 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:06 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:07 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:04:07.005 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ef217152-08e8-40c8-a663-3565c5b77d4a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
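This transaction is the agent acknowledging the SB_Global nb_cfg bump (10 -> 11) seen at 10:04:05: after the logged 2-second delay it stamps neutron:ovn-metadata-sb-cfg into Chassis_Private external_ids. A toy model of that delayed write; the class and method names are invented for illustration:

    import threading

    class ChassisCfgSync:
        """Toy model: wait ~2s, then record the acknowledged nb_cfg."""
        def __init__(self):
            self.external_ids = {}

        def on_sb_global_update(self, nb_cfg: int, delay: float = 2.0):
            threading.Timer(delay, self._write, args=(nb_cfg,)).start()

        def _write(self, nb_cfg: int):
            self.external_ids["neutron:ovn-metadata-sb-cfg"] = str(nb_cfg)
            print("chassis external_ids ->", self.external_ids)

    ChassisCfgSync().on_sb_global_update(11)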
Oct  9 10:04:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:07.092Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:07.100Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:07.101Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:07.101Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:07 compute-0 podman[203802]: 2025-10-09 10:04:07.615423728 +0000 UTC m=+0.049358821 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  9 10:04:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:04:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:07.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:04:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:08.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v933: 337 pgs: 337 active+clean; 41 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Oct  9 10:04:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:08.928Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:08.939Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:08.939Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:08.940Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:09 compute-0 nova_compute[187439]: 2025-10-09 10:04:09.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:04:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:09.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:04:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:04:10.117 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:04:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:04:10.118 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:04:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:04:10.118 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:04:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:10.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-0[9729]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
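The ceph-crash scraper typically runs as the unprivileged ceph user, so a root-owned or mis-moded /var/lib/ceph/crash yields exactly this Errno 13. A minimal check of whether the current user can list that directory; the path comes from the error above:

    import os

    crash_dir = "/var/lib/ceph/crash"
    try:
        entries = os.listdir(crash_dir)
        print(f"{crash_dir}: {len(entries)} entries")
    except PermissionError as exc:  # Errno 13, as ceph-crash logs above
        print(f"cannot scrape {crash_dir}: {exc}")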
Oct  9 10:04:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v934: 337 pgs: 337 active+clean; 41 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Oct  9 10:04:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:11.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:11 compute-0 nova_compute[187439]: 2025-10-09 10:04:11.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:11 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:11 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:11 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:12.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:04:12] "GET /metrics HTTP/1.1" 200 48548 "" "Prometheus/2.51.0"
Oct  9 10:04:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:04:12] "GET /metrics HTTP/1.1" 200 48548 "" "Prometheus/2.51.0"
Oct  9 10:04:12 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v935: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Oct  9 10:04:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:13.566Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:13.574Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:13.574Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:13.574Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:13.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:14.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:14 compute-0 nova_compute[187439]: 2025-10-09 10:04:14.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:14 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v936: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct  9 10:04:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:15.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:16.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:16 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v937: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct  9 10:04:16 compute-0 nova_compute[187439]: 2025-10-09 10:04:16.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:16 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:16 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:16 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:17 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:17.093Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:17.100Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:17.101Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:17.101Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:17.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:18.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:18 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v938: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:04:18 compute-0 podman[203856]: 2025-10-09 10:04:18.636875895 +0000 UTC m=+0.067830124 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 10:04:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:18.928Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:18.938Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:18.940Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:18.940Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
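
[Annotation] Every ceph-dashboard webhook notification in this section fails the same way: the receiver URLs point at the hostnames np0005478302-04.shiftstack, the resolver at 192.168.122.80:53 answers "no such host", and each retry loop is canceled after 7-8 attempts (the dispatch.go error lines), only to restart on the next alert batch. A minimal sketch reproducing the lookup through the system resolver (assuming this host's resolv.conf points at 192.168.122.80, as the Go dial errors above indicate):

    import socket

    # Webhook target hostnames, taken from the err= fields above.
    hosts = [
        "np0005478302.shiftstack",
        "np0005478303.shiftstack",
        "np0005478304.shiftstack",
    ]

    for host in hosts:
        try:
            addrs = socket.getaddrinfo(host, 8443, proto=socket.IPPROTO_TCP)
            print(host, "->", sorted({a[4][0] for a in addrs}))
        except socket.gaierror as exc:
            # Mirrors the "dial tcp: lookup ... no such host" failures above.
            print(host, "->", exc)
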
Oct  9 10:04:19 compute-0 nova_compute[187439]: 2025-10-09 10:04:19.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:04:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
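
[Annotation] The handle_command/audit pair above shows the active mgr (mgr.compute-0.lwqgfy) polling the monitor with "osd blocklist ls" over the mon command interface; the same pattern recurs throughout this section. The identical call can be issued from the rados Python binding; a minimal sketch, assuming a readable /etc/ceph/ceph.conf and an admin keyring on this node:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    # The same JSON command the mgr dispatches in the audit line above.
    cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
    ret, outbuf, errs = cluster.mon_command(cmd, b"")
    print(ret, json.loads(outbuf) if outbuf else [])
    cluster.shutdown()
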
Oct  9 10:04:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:04:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:04:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:04:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:04:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:04:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:04:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:19.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:20.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:20 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v939: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:04:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:21.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:21 compute-0 nova_compute[187439]: 2025-10-09 10:04:21.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:21 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:22.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:04:22] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Oct  9 10:04:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:04:22] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
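
[Annotation] The two lines above record one scrape twice, once from the mgr container's stdout and once from the cherrypy access log inside ceph-mgr: Prometheus/2.51.0 at 192.168.122.100 pulls 48535 bytes of /metrics, and the same scrape repeats every 10 s in this section. A sketch of the same request (the port is not visible in these lines; 9283, the mgr prometheus module default, is an assumption):

    import urllib.request

    # Host from the access log above; port 9283 is assumed.
    url = "http://192.168.122.100:9283/metrics"
    with urllib.request.urlopen(url, timeout=5) as resp:
        body = resp.read()
    print(resp.status, len(body))  # the lines above log 200 and 48535 bytes
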
Oct  9 10:04:22 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v940: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:04:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:23.567Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:23.574Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:23.574Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:23.574Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:04:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:23.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:04:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:24.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:24 compute-0 nova_compute[187439]: 2025-10-09 10:04:24.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:24 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v941: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:04:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:04:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:25.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:04:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:26.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:26 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v942: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:04:26 compute-0 nova_compute[187439]: 2025-10-09 10:04:26.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:26 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:26 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:27.095Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:27.105Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:27.106Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:27.106Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:27 compute-0 podman[203888]: 2025-10-09 10:04:27.61440747 +0000 UTC m=+0.047353864 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 10:04:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:04:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:27.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:04:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:28.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:28 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v943: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:04:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:28.930Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:28.938Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:28.939Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:28.939Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:29 compute-0 nova_compute[187439]: 2025-10-09 10:04:29.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:29.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:04:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:30.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:04:30 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v944: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:04:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:31.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:31 compute-0 nova_compute[187439]: 2025-10-09 10:04:31.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:31 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:31 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:31 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:32 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:04:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:32.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:04:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:04:32] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Oct  9 10:04:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:04:32] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Oct  9 10:04:32 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v945: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:04:33 compute-0 podman[203935]: 2025-10-09 10:04:33.293052462 +0000 UTC m=+0.037491481 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  9 10:04:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:33.567Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:33.577Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:33.578Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:33.578Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:04:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:33.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:04:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:04:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:34.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:04:34 compute-0 nova_compute[187439]: 2025-10-09 10:04:34.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:34 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v946: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:04:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:04:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:04:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:04:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:35.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:04:35.844062) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275844090, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2120, "num_deletes": 251, "total_data_size": 4063885, "memory_usage": 4141904, "flush_reason": "Manual Compaction"}
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275853016, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 3946077, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24673, "largest_seqno": 26792, "table_properties": {"data_size": 3936844, "index_size": 5727, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19559, "raw_average_key_size": 20, "raw_value_size": 3918125, "raw_average_value_size": 4051, "num_data_blocks": 252, "num_entries": 967, "num_filter_entries": 967, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004069, "oldest_key_time": 1760004069, "file_creation_time": 1760004275, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 8981 microseconds, and 6563 cpu microseconds.
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:04:35.853043) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 3946077 bytes OK
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:04:35.853056) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:04:35.854123) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:04:35.854133) EVENT_LOG_v1 {"time_micros": 1760004275854130, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:04:35.854156) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4055322, prev total WAL file size 4055322, number of live WAL files 2.
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:04:35.854764) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3853KB)], [56(11MB)]
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275854791, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 16285822, "oldest_snapshot_seqno": -1}
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5798 keys, 14126786 bytes, temperature: kUnknown
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275896077, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 14126786, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14087565, "index_size": 23623, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 147365, "raw_average_key_size": 25, "raw_value_size": 13982210, "raw_average_value_size": 2411, "num_data_blocks": 962, "num_entries": 5798, "num_filter_entries": 5798, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002419, "oldest_key_time": 0, "file_creation_time": 1760004275, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:04:35.896308) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14126786 bytes
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:04:35.896783) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 393.8 rd, 341.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 11.8 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 6314, records dropped: 516 output_compression: NoCompression
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:04:35.896796) EVENT_LOG_v1 {"time_micros": 1760004275896790, "job": 30, "event": "compaction_finished", "compaction_time_micros": 41356, "compaction_time_cpu_micros": 22459, "output_level": 6, "num_output_files": 1, "total_output_size": 14126786, "num_input_records": 6314, "num_output_records": 5798, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275897302, "job": 30, "event": "table_file_deletion", "file_number": 58}
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275898605, "job": 30, "event": "table_file_deletion", "file_number": 56}
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:04:35.854728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:04:35.898642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:04:35.898646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:04:35.898647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:04:35.898649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:04:35 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:04:35.898650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
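
[Annotation] The rocksdb burst above is one manual compaction cycle in the monitor's store.db: a memtable flush to L0 table #58 (job 29, ~3.9 MB), a manual L0+L6 compaction producing table #59 (job 30, ~14 MB, 516 records dropped), then deletion of the obsolete WAL and SST files. The machine-readable part is the JSON payload after each EVENT_LOG_v1 marker; a minimal sketch that extracts those events from a journal excerpt (parse_events is a hypothetical helper for illustration, not part of any Ceph tooling):

    import json

    MARKER = "EVENT_LOG_v1 "

    def parse_events(lines):
        """Yield the JSON payload following each EVENT_LOG_v1 marker."""
        for line in lines:
            _, _, payload = line.partition(MARKER)
            if payload:
                yield json.loads(payload)

    sample = ['... rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275844090,'
              ' "job": 29, "event": "flush_started", "num_entries": 2120}']
    for ev in parse_events(sample):
        print(ev["job"], ev["event"])  # -> 29 flush_started
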
Oct  9 10:04:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:36.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:36 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v947: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:04:36 compute-0 nova_compute[187439]: 2025-10-09 10:04:36.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:36 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:36 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:36 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:37.096Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:37.103Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:37.104Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:37.104Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:37.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:38.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:38 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v948: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:04:38 compute-0 podman[203958]: 2025-10-09 10:04:38.612711914 +0000 UTC m=+0.048268647 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:04:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:38.930Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:38.938Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:38.938Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:38.938Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:39 compute-0 nova_compute[187439]: 2025-10-09 10:04:39.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:39.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:04:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:40.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:04:40 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v949: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:04:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:41.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:41 compute-0 nova_compute[187439]: 2025-10-09 10:04:41.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:41 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:41 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:41 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:42.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:04:42] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Oct  9 10:04:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:04:42] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Oct  9 10:04:42 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v950: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:04:43 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:04:43 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:04:43 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 10:04:43 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:04:43 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v951: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 10:04:43 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 10:04:43 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:04:43 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 10:04:43 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:04:43 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 10:04:43 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 10:04:43 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 10:04:43 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:04:43 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:04:43 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:04:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:43.569Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:43.580Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:43.580Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:43.581Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:43 compute-0 podman[204139]: 2025-10-09 10:04:43.695281238 +0000 UTC m=+0.033383432 container create ff74f1269f51305f6a0a304186a8a087b4698e8c5d4345dc831c67f386722582 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hermann, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:04:43 compute-0 systemd[1]: Started libpod-conmon-ff74f1269f51305f6a0a304186a8a087b4698e8c5d4345dc831c67f386722582.scope.
Oct  9 10:04:43 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:04:43 compute-0 podman[204139]: 2025-10-09 10:04:43.759784683 +0000 UTC m=+0.097886897 container init ff74f1269f51305f6a0a304186a8a087b4698e8c5d4345dc831c67f386722582 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct  9 10:04:43 compute-0 podman[204139]: 2025-10-09 10:04:43.76714954 +0000 UTC m=+0.105251734 container start ff74f1269f51305f6a0a304186a8a087b4698e8c5d4345dc831c67f386722582 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hermann, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  9 10:04:43 compute-0 podman[204139]: 2025-10-09 10:04:43.769098472 +0000 UTC m=+0.107200686 container attach ff74f1269f51305f6a0a304186a8a087b4698e8c5d4345dc831c67f386722582 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hermann, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:04:43 compute-0 frosty_hermann[204152]: 167 167
Oct  9 10:04:43 compute-0 systemd[1]: libpod-ff74f1269f51305f6a0a304186a8a087b4698e8c5d4345dc831c67f386722582.scope: Deactivated successfully.
Oct  9 10:04:43 compute-0 podman[204139]: 2025-10-09 10:04:43.772597325 +0000 UTC m=+0.110699519 container died ff74f1269f51305f6a0a304186a8a087b4698e8c5d4345dc831c67f386722582 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  9 10:04:43 compute-0 podman[204139]: 2025-10-09 10:04:43.68271493 +0000 UTC m=+0.020817135 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:04:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f03330b892be4e992851c6cd49558fefc6622bfac9f24a328f3d466b36321f44-merged.mount: Deactivated successfully.
Oct  9 10:04:43 compute-0 podman[204139]: 2025-10-09 10:04:43.795098541 +0000 UTC m=+0.133200724 container remove ff74f1269f51305f6a0a304186a8a087b4698e8c5d4345dc831c67f386722582 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hermann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:04:43 compute-0 systemd[1]: libpod-conmon-ff74f1269f51305f6a0a304186a8a087b4698e8c5d4345dc831c67f386722582.scope: Deactivated successfully.
Oct  9 10:04:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:43.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:43 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:04:43 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:04:43 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:04:43 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:04:43 compute-0 podman[204173]: 2025-10-09 10:04:43.954504969 +0000 UTC m=+0.042810223 container create 90015cd1ce09b0f9ca616bf96c09d8bccde5d67d46afa53d444073722f1a6fd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:04:43 compute-0 systemd[1]: Started libpod-conmon-90015cd1ce09b0f9ca616bf96c09d8bccde5d67d46afa53d444073722f1a6fd2.scope.
Oct  9 10:04:44 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:04:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b8d1f1a77080995fe57ac828f8587beaa36d52f3655f37f1fb4e9bf0d219ea0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:04:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b8d1f1a77080995fe57ac828f8587beaa36d52f3655f37f1fb4e9bf0d219ea0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:04:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b8d1f1a77080995fe57ac828f8587beaa36d52f3655f37f1fb4e9bf0d219ea0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:04:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b8d1f1a77080995fe57ac828f8587beaa36d52f3655f37f1fb4e9bf0d219ea0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:04:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b8d1f1a77080995fe57ac828f8587beaa36d52f3655f37f1fb4e9bf0d219ea0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:04:44 compute-0 podman[204173]: 2025-10-09 10:04:43.936999405 +0000 UTC m=+0.025304669 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:04:44 compute-0 podman[204173]: 2025-10-09 10:04:44.035913487 +0000 UTC m=+0.124218741 container init 90015cd1ce09b0f9ca616bf96c09d8bccde5d67d46afa53d444073722f1a6fd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:04:44 compute-0 podman[204173]: 2025-10-09 10:04:44.041571066 +0000 UTC m=+0.129876321 container start 90015cd1ce09b0f9ca616bf96c09d8bccde5d67d46afa53d444073722f1a6fd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mccarthy, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  9 10:04:44 compute-0 podman[204173]: 2025-10-09 10:04:44.042986484 +0000 UTC m=+0.131291737 container attach 90015cd1ce09b0f9ca616bf96c09d8bccde5d67d46afa53d444073722f1a6fd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mccarthy, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:04:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:44.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:44 compute-0 elated_mccarthy[204186]: --> passed data devices: 0 physical, 1 LVM
Oct  9 10:04:44 compute-0 elated_mccarthy[204186]: --> All data devices are unavailable
Oct  9 10:04:44 compute-0 systemd[1]: libpod-90015cd1ce09b0f9ca616bf96c09d8bccde5d67d46afa53d444073722f1a6fd2.scope: Deactivated successfully.
Oct  9 10:04:44 compute-0 conmon[204186]: conmon 90015cd1ce09b0f9ca61 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-90015cd1ce09b0f9ca616bf96c09d8bccde5d67d46afa53d444073722f1a6fd2.scope/container/memory.events
Oct  9 10:04:44 compute-0 podman[204173]: 2025-10-09 10:04:44.330224339 +0000 UTC m=+0.418529593 container died 90015cd1ce09b0f9ca616bf96c09d8bccde5d67d46afa53d444073722f1a6fd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True)
Oct  9 10:04:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b8d1f1a77080995fe57ac828f8587beaa36d52f3655f37f1fb4e9bf0d219ea0-merged.mount: Deactivated successfully.
Oct  9 10:04:44 compute-0 podman[204173]: 2025-10-09 10:04:44.353653883 +0000 UTC m=+0.441959137 container remove 90015cd1ce09b0f9ca616bf96c09d8bccde5d67d46afa53d444073722f1a6fd2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mccarthy, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct  9 10:04:44 compute-0 systemd[1]: libpod-conmon-90015cd1ce09b0f9ca616bf96c09d8bccde5d67d46afa53d444073722f1a6fd2.scope: Deactivated successfully.
Oct  9 10:04:44 compute-0 nova_compute[187439]: 2025-10-09 10:04:44.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:04:44 compute-0 podman[204295]: 2025-10-09 10:04:44.803753077 +0000 UTC m=+0.032105874 container create 6a54e85f4699d52c33c1dd72b6766b89cc6d6c405bf036c196bc32c3609ffdc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:04:44 compute-0 systemd[1]: Started libpod-conmon-6a54e85f4699d52c33c1dd72b6766b89cc6d6c405bf036c196bc32c3609ffdc0.scope.
Oct  9 10:04:44 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:04:44 compute-0 podman[204295]: 2025-10-09 10:04:44.869801704 +0000 UTC m=+0.098154501 container init 6a54e85f4699d52c33c1dd72b6766b89cc6d6c405bf036c196bc32c3609ffdc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_swirles, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:04:44 compute-0 podman[204295]: 2025-10-09 10:04:44.875134131 +0000 UTC m=+0.103486928 container start 6a54e85f4699d52c33c1dd72b6766b89cc6d6c405bf036c196bc32c3609ffdc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_swirles, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  9 10:04:44 compute-0 podman[204295]: 2025-10-09 10:04:44.877290094 +0000 UTC m=+0.105642890 container attach 6a54e85f4699d52c33c1dd72b6766b89cc6d6c405bf036c196bc32c3609ffdc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:04:44 compute-0 amazing_swirles[204308]: 167 167
Oct  9 10:04:44 compute-0 systemd[1]: libpod-6a54e85f4699d52c33c1dd72b6766b89cc6d6c405bf036c196bc32c3609ffdc0.scope: Deactivated successfully.
Oct  9 10:04:44 compute-0 podman[204295]: 2025-10-09 10:04:44.881033948 +0000 UTC m=+0.109386744 container died 6a54e85f4699d52c33c1dd72b6766b89cc6d6c405bf036c196bc32c3609ffdc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  9 10:04:44 compute-0 podman[204295]: 2025-10-09 10:04:44.789896038 +0000 UTC m=+0.018248855 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:04:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-59077b2064322b4232c5f451422e2b2c73fbea2d2b4fbc2f57529cf0785d0192-merged.mount: Deactivated successfully.
Oct  9 10:04:44 compute-0 podman[204295]: 2025-10-09 10:04:44.901616899 +0000 UTC m=+0.129969696 container remove 6a54e85f4699d52c33c1dd72b6766b89cc6d6c405bf036c196bc32c3609ffdc0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:04:44 compute-0 systemd[1]: libpod-conmon-6a54e85f4699d52c33c1dd72b6766b89cc6d6c405bf036c196bc32c3609ffdc0.scope: Deactivated successfully.
Oct  9 10:04:45 compute-0 podman[204330]: 2025-10-09 10:04:45.044611796 +0000 UTC m=+0.038548623 container create 0c4135d02f342560180626f9bf9de46ea611c45dd5a809186e3397006daf1601 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_goodall, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  9 10:04:45 compute-0 systemd[1]: Started libpod-conmon-0c4135d02f342560180626f9bf9de46ea611c45dd5a809186e3397006daf1601.scope.
Oct  9 10:04:45 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:04:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca549f5f0b7d0bcd50e7be70dd6e664bf33807efb725f76c312959fdb272247/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:04:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca549f5f0b7d0bcd50e7be70dd6e664bf33807efb725f76c312959fdb272247/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:04:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca549f5f0b7d0bcd50e7be70dd6e664bf33807efb725f76c312959fdb272247/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:04:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca549f5f0b7d0bcd50e7be70dd6e664bf33807efb725f76c312959fdb272247/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:04:45 compute-0 podman[204330]: 2025-10-09 10:04:45.118066857 +0000 UTC m=+0.112003674 container init 0c4135d02f342560180626f9bf9de46ea611c45dd5a809186e3397006daf1601 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_goodall, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:04:45 compute-0 podman[204330]: 2025-10-09 10:04:45.124044381 +0000 UTC m=+0.117981197 container start 0c4135d02f342560180626f9bf9de46ea611c45dd5a809186e3397006daf1601 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct  9 10:04:45 compute-0 podman[204330]: 2025-10-09 10:04:45.125272444 +0000 UTC m=+0.119209261 container attach 0c4135d02f342560180626f9bf9de46ea611c45dd5a809186e3397006daf1601 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_goodall, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:04:45 compute-0 podman[204330]: 2025-10-09 10:04:45.030977877 +0000 UTC m=+0.024914704 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:04:45 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v952: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 10:04:45 compute-0 magical_goodall[204343]: {
Oct  9 10:04:45 compute-0 magical_goodall[204343]:    "1": [
Oct  9 10:04:45 compute-0 magical_goodall[204343]:        {
Oct  9 10:04:45 compute-0 magical_goodall[204343]:            "devices": [
Oct  9 10:04:45 compute-0 magical_goodall[204343]:                "/dev/loop3"
Oct  9 10:04:45 compute-0 magical_goodall[204343]:            ],
Oct  9 10:04:45 compute-0 magical_goodall[204343]:            "lv_name": "ceph_lv0",
Oct  9 10:04:45 compute-0 magical_goodall[204343]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:04:45 compute-0 magical_goodall[204343]:            "lv_size": "21470642176",
Oct  9 10:04:45 compute-0 magical_goodall[204343]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 10:04:45 compute-0 magical_goodall[204343]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 10:04:45 compute-0 magical_goodall[204343]:            "name": "ceph_lv0",
Oct  9 10:04:45 compute-0 magical_goodall[204343]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:04:45 compute-0 magical_goodall[204343]:            "tags": {
Oct  9 10:04:45 compute-0 magical_goodall[204343]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:04:45 compute-0 magical_goodall[204343]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 10:04:45 compute-0 magical_goodall[204343]:                "ceph.cephx_lockbox_secret": "",
Oct  9 10:04:45 compute-0 magical_goodall[204343]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 10:04:45 compute-0 magical_goodall[204343]:                "ceph.cluster_name": "ceph",
Oct  9 10:04:45 compute-0 magical_goodall[204343]:                "ceph.crush_device_class": "",
Oct  9 10:04:45 compute-0 magical_goodall[204343]:                "ceph.encrypted": "0",
Oct  9 10:04:45 compute-0 magical_goodall[204343]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 10:04:45 compute-0 magical_goodall[204343]:                "ceph.osd_id": "1",
Oct  9 10:04:45 compute-0 magical_goodall[204343]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 10:04:45 compute-0 magical_goodall[204343]:                "ceph.type": "block",
Oct  9 10:04:45 compute-0 magical_goodall[204343]:                "ceph.vdo": "0",
Oct  9 10:04:45 compute-0 magical_goodall[204343]:                "ceph.with_tpm": "0"
Oct  9 10:04:45 compute-0 magical_goodall[204343]:            },
Oct  9 10:04:45 compute-0 magical_goodall[204343]:            "type": "block",
Oct  9 10:04:45 compute-0 magical_goodall[204343]:            "vg_name": "ceph_vg0"
Oct  9 10:04:45 compute-0 magical_goodall[204343]:        }
Oct  9 10:04:45 compute-0 magical_goodall[204343]:    ]
Oct  9 10:04:45 compute-0 magical_goodall[204343]: }
Oct  9 10:04:45 compute-0 systemd[1]: libpod-0c4135d02f342560180626f9bf9de46ea611c45dd5a809186e3397006daf1601.scope: Deactivated successfully.
Oct  9 10:04:45 compute-0 podman[204330]: 2025-10-09 10:04:45.383560041 +0000 UTC m=+0.377496859 container died 0c4135d02f342560180626f9bf9de46ea611c45dd5a809186e3397006daf1601 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_goodall, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:04:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ca549f5f0b7d0bcd50e7be70dd6e664bf33807efb725f76c312959fdb272247-merged.mount: Deactivated successfully.
Oct  9 10:04:45 compute-0 podman[204330]: 2025-10-09 10:04:45.410040536 +0000 UTC m=+0.403977353 container remove 0c4135d02f342560180626f9bf9de46ea611c45dd5a809186e3397006daf1601 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_goodall, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  9 10:04:45 compute-0 systemd[1]: libpod-conmon-0c4135d02f342560180626f9bf9de46ea611c45dd5a809186e3397006daf1601.scope: Deactivated successfully.
Oct  9 10:04:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:45.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:45 compute-0 podman[204442]: 2025-10-09 10:04:45.882669709 +0000 UTC m=+0.038443125 container create 2c3c332d091edb1c675cdca0f8bd00c860306f02d326d78756e20c0e04bc90b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:04:45 compute-0 systemd[1]: Started libpod-conmon-2c3c332d091edb1c675cdca0f8bd00c860306f02d326d78756e20c0e04bc90b5.scope.
Oct  9 10:04:45 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:04:45 compute-0 podman[204442]: 2025-10-09 10:04:45.941458236 +0000 UTC m=+0.097231653 container init 2c3c332d091edb1c675cdca0f8bd00c860306f02d326d78756e20c0e04bc90b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lichterman, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid)
Oct  9 10:04:45 compute-0 podman[204442]: 2025-10-09 10:04:45.946375112 +0000 UTC m=+0.102148527 container start 2c3c332d091edb1c675cdca0f8bd00c860306f02d326d78756e20c0e04bc90b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:04:45 compute-0 podman[204442]: 2025-10-09 10:04:45.94774395 +0000 UTC m=+0.103517366 container attach 2c3c332d091edb1c675cdca0f8bd00c860306f02d326d78756e20c0e04bc90b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lichterman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  9 10:04:45 compute-0 infallible_lichterman[204455]: 167 167
Oct  9 10:04:45 compute-0 systemd[1]: libpod-2c3c332d091edb1c675cdca0f8bd00c860306f02d326d78756e20c0e04bc90b5.scope: Deactivated successfully.
Oct  9 10:04:45 compute-0 conmon[204455]: conmon 2c3c332d091edb1c675c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2c3c332d091edb1c675cdca0f8bd00c860306f02d326d78756e20c0e04bc90b5.scope/container/memory.events
Oct  9 10:04:45 compute-0 podman[204442]: 2025-10-09 10:04:45.952006482 +0000 UTC m=+0.107779888 container died 2c3c332d091edb1c675cdca0f8bd00c860306f02d326d78756e20c0e04bc90b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lichterman, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:04:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-55af4b4cf8a3b17d892947ce641a0abc4c7300c8813f2756dc306959d9f2c4fc-merged.mount: Deactivated successfully.
Oct  9 10:04:45 compute-0 podman[204442]: 2025-10-09 10:04:45.869717945 +0000 UTC m=+0.025491382 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:04:45 compute-0 podman[204442]: 2025-10-09 10:04:45.971086832 +0000 UTC m=+0.126860248 container remove 2c3c332d091edb1c675cdca0f8bd00c860306f02d326d78756e20c0e04bc90b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  9 10:04:45 compute-0 systemd[1]: libpod-conmon-2c3c332d091edb1c675cdca0f8bd00c860306f02d326d78756e20c0e04bc90b5.scope: Deactivated successfully.
Oct  9 10:04:46 compute-0 podman[204477]: 2025-10-09 10:04:46.104426456 +0000 UTC m=+0.032927021 container create 975633d0f123197715703368a10c529aeeb8d92298f9271a81f04a1a100ebca3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_spence, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  9 10:04:46 compute-0 systemd[1]: Started libpod-conmon-975633d0f123197715703368a10c529aeeb8d92298f9271a81f04a1a100ebca3.scope.
Oct  9 10:04:46 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:04:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35f3a235fc5947ce7e772d57795590842ed6d17ab54b5b9a655a1fe19f5029cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:04:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35f3a235fc5947ce7e772d57795590842ed6d17ab54b5b9a655a1fe19f5029cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:04:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35f3a235fc5947ce7e772d57795590842ed6d17ab54b5b9a655a1fe19f5029cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:04:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35f3a235fc5947ce7e772d57795590842ed6d17ab54b5b9a655a1fe19f5029cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:04:46 compute-0 podman[204477]: 2025-10-09 10:04:46.168518036 +0000 UTC m=+0.097018602 container init 975633d0f123197715703368a10c529aeeb8d92298f9271a81f04a1a100ebca3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_spence, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  9 10:04:46 compute-0 podman[204477]: 2025-10-09 10:04:46.17388093 +0000 UTC m=+0.102381495 container start 975633d0f123197715703368a10c529aeeb8d92298f9271a81f04a1a100ebca3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 10:04:46 compute-0 podman[204477]: 2025-10-09 10:04:46.175303972 +0000 UTC m=+0.103804536 container attach 975633d0f123197715703368a10c529aeeb8d92298f9271a81f04a1a100ebca3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:04:46 compute-0 podman[204477]: 2025-10-09 10:04:46.092017796 +0000 UTC m=+0.020518371 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:04:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:46.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:46 compute-0 lvm[204567]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:04:46 compute-0 lvm[204567]: VG ceph_vg0 finished
Oct  9 10:04:46 compute-0 pensive_spence[204490]: {}
Oct  9 10:04:46 compute-0 systemd[1]: libpod-975633d0f123197715703368a10c529aeeb8d92298f9271a81f04a1a100ebca3.scope: Deactivated successfully.
Oct  9 10:04:46 compute-0 podman[204477]: 2025-10-09 10:04:46.753624666 +0000 UTC m=+0.682125241 container died 975633d0f123197715703368a10c529aeeb8d92298f9271a81f04a1a100ebca3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  9 10:04:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-35f3a235fc5947ce7e772d57795590842ed6d17ab54b5b9a655a1fe19f5029cb-merged.mount: Deactivated successfully.
Oct  9 10:04:46 compute-0 podman[204477]: 2025-10-09 10:04:46.77957981 +0000 UTC m=+0.708080375 container remove 975633d0f123197715703368a10c529aeeb8d92298f9271a81f04a1a100ebca3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct  9 10:04:46 compute-0 systemd[1]: libpod-conmon-975633d0f123197715703368a10c529aeeb8d92298f9271a81f04a1a100ebca3.scope: Deactivated successfully.
Oct  9 10:04:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:04:46 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:04:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:04:46 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:04:46 compute-0 nova_compute[187439]: 2025-10-09 10:04:46.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:04:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:46 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:47.096Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:47.103Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:47.104Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:47.104Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:47 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v953: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 10:04:47 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:04:47 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:04:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:04:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:47.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:04:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:48.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
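
The paired "HEAD / HTTP/1.0" requests from 192.168.122.102 and 192.168.122.100, recurring roughly every two seconds with 200 responses and sub-millisecond latency, look like external health probes of the radosgw beast frontend. A sketch of an equivalent probe; the target host and port are assumptions, since only the request line and peer address appear in these lines:

    import http.client

    # Hypothetical probe target: the RGW frontend port is not visible in the log.
    conn = http.client.HTTPConnection("compute-0", 8080, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # the log shows 200 for every probe
    conn.close()
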
Oct  9 10:04:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:48.931Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:48.940Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:48.940Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:48.941Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:49 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v954: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 10:04:49 compute-0 nova_compute[187439]: 2025-10-09 10:04:49.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_10:04:49
Oct  9 10:04:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 10:04:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 10:04:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', '.mgr', '.nfs', '.rgw.root', 'backups', 'volumes', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control']
Oct  9 10:04:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 10:04:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:04:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:04:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:04:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:04:49 compute-0 podman[204607]: 2025-10-09 10:04:49.637021997 +0000 UTC m=+0.064744541 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  9 10:04:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:04:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:04:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 10:04:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:04:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:04:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:04:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:04:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:04:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:04:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 10:04:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:04:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:04:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:04:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:04:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:49.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:50.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:50 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 10:04:50 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 2853 syncs, 3.60 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2923 writes, 9704 keys, 2923 commit groups, 1.0 writes per commit group, ingest: 10.38 MB, 0.02 MB/s#012Interval WAL: 2923 writes, 1348 syncs, 2.17 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
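
The #012 runs in the rocksdb stats dump above, and the #033[00m trailer on the nova_compute lines, are rsyslog's octal escapes for control characters: #012 is a newline, #033 is the ESC that opens an ANSI color-reset sequence. A small sketch to restore the original characters when post-processing the file:

    import re

    def unescape_rsyslog(line: str) -> str:
        # Replace rsyslog "#NNN" octal escapes (#012 = "\n", #033 = ESC) with the real character.
        return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), line)

    # Turns the single-line rocksdb dump back into its original multi-line form.
    print(unescape_rsyslog("** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval"))
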
Oct  9 10:04:51 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v955: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 10:04:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:51.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:51 compute-0 nova_compute[187439]: 2025-10-09 10:04:51.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:51 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:51 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:52.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:04:52] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Oct  9 10:04:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:04:52] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Oct  9 10:04:52 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  9 10:04:53 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v956: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 10:04:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:53.570Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:53.591Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:53.595Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:53.595Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:53.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:54 compute-0 systemd[1]: Created slice User Slice of UID 1000.
Oct  9 10:04:54 compute-0 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  9 10:04:54 compute-0 systemd-logind[798]: New session 41 of user zuul.
Oct  9 10:04:54 compute-0 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  9 10:04:54 compute-0 systemd[1]: Starting User Manager for UID 1000...
Oct  9 10:04:54 compute-0 nova_compute[187439]: 2025-10-09 10:04:54.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:54.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:54 compute-0 systemd[204665]: Queued start job for default target Main User Target.
Oct  9 10:04:54 compute-0 systemd[204665]: Created slice User Application Slice.
Oct  9 10:04:54 compute-0 systemd[204665]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 10:04:54 compute-0 systemd[204665]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 10:04:54 compute-0 systemd[204665]: Reached target Paths.
Oct  9 10:04:54 compute-0 systemd[204665]: Reached target Timers.
Oct  9 10:04:54 compute-0 systemd[204665]: Starting D-Bus User Message Bus Socket...
Oct  9 10:04:54 compute-0 systemd[204665]: Starting Create User's Volatile Files and Directories...
Oct  9 10:04:54 compute-0 systemd[204665]: Finished Create User's Volatile Files and Directories.
Oct  9 10:04:54 compute-0 systemd[204665]: Listening on D-Bus User Message Bus Socket.
Oct  9 10:04:54 compute-0 systemd[204665]: Reached target Sockets.
Oct  9 10:04:54 compute-0 systemd[204665]: Reached target Basic System.
Oct  9 10:04:54 compute-0 systemd[204665]: Reached target Main User Target.
Oct  9 10:04:54 compute-0 systemd[204665]: Startup finished in 125ms.
Oct  9 10:04:54 compute-0 systemd[1]: Started User Manager for UID 1000.
Oct  9 10:04:54 compute-0 systemd[1]: Started Session 41 of User zuul.
Oct  9 10:04:54 compute-0 nova_compute[187439]: 2025-10-09 10:04:54.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:55 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v957: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:04:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:55.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:56 compute-0 nova_compute[187439]: 2025-10-09 10:04:56.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:56.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:56 compute-0 nova_compute[187439]: 2025-10-09 10:04:56.263 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:04:56 compute-0 nova_compute[187439]: 2025-10-09 10:04:56.263 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:04:56 compute-0 nova_compute[187439]: 2025-10-09 10:04:56.264 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:04:56 compute-0 nova_compute[187439]: 2025-10-09 10:04:56.264 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:04:56 compute-0 nova_compute[187439]: 2025-10-09 10:04:56.264 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:04:56 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26359 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:04:56 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26645 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:04:56 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.16800 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:04:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:04:56 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/336862096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:04:56 compute-0 nova_compute[187439]: 2025-10-09 10:04:56.616 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.352s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:04:56 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26386 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:04:56 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26663 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:04:56 compute-0 nova_compute[187439]: 2025-10-09 10:04:56.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:56 compute-0 nova_compute[187439]: 2025-10-09 10:04:56.862 2 WARNING nova.virt.libvirt.driver [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:04:56 compute-0 nova_compute[187439]: 2025-10-09 10:04:56.863 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4599MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:04:56 compute-0 nova_compute[187439]: 2025-10-09 10:04:56.863 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:04:56 compute-0 nova_compute[187439]: 2025-10-09 10:04:56.864 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:04:56 compute-0 nova_compute[187439]: 2025-10-09 10:04:56.921 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:04:56 compute-0 nova_compute[187439]: 2025-10-09 10:04:56.921 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:04:56 compute-0 nova_compute[187439]: 2025-10-09 10:04:56.940 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:04:56 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.16827 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:04:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:56 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:56 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:04:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:57.097Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:57.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:57.115Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:57.115Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:57 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v958: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:04:57 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:04:57 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3305344509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:04:57 compute-0 nova_compute[187439]: 2025-10-09 10:04:57.302 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:04:57 compute-0 nova_compute[187439]: 2025-10-09 10:04:57.307 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:04:57 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct  9 10:04:57 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/551610347' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  9 10:04:57 compute-0 nova_compute[187439]: 2025-10-09 10:04:57.365 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:04:57 compute-0 nova_compute[187439]: 2025-10-09 10:04:57.367 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:04:57 compute-0 nova_compute[187439]: 2025-10-09 10:04:57.367 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.504s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
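
The inventory that the resource tracker reports to placement at 10:04:57 fixes what can be scheduled here: for each resource class, schedulable capacity works out to (total - reserved) * allocation_ratio. A quick check with the numbers copied from the log line above:

    # Inventory copied from the nova.scheduler.client.report line at 10:04:57.
    inventory = {
        "VCPU":      {"total": 4,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 16.0, MEMORY_MB 7168.0, DISK_GB 52.2
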
Oct  9 10:04:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:57.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:57 compute-0 podman[204965]: 2025-10-09 10:04:57.904862954 +0000 UTC m=+0.079116630 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct  9 10:04:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:58.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:58.932Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:58.946Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:58.947Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:04:58.947Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v959: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:04:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
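
Every pg_autoscaler pair above follows the same arithmetic: pg target = (fraction of raw capacity used) * bias * a per-cluster PG budget, then quantized (the logged targets quantize to 1, 16, or 32, all left at their current values). The budget itself is not printed, but dividing each logged target by usage * bias gives exactly 300 for this cluster; treat that constant as inferred from these lines, not documented. A sketch verifying the relationship against three of the logged pools:

    # (pool, used fraction, bias, logged pg target), copied from the lines above.
    pools = [
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("images",             0.000665858301588852,  1.0, 0.19975749047665559),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ]
    PG_BUDGET = 300  # inferred: every logged target / (usage * bias) comes out to 300
    for name, used, bias, target in pools:
        assert abs(used * bias * PG_BUDGET - target) < 1e-9, name
        print(name, "pg target reproduced")
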
Oct  9 10:04:59 compute-0 nova_compute[187439]: 2025-10-09 10:04:59.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:59 compute-0 ovs-vsctl[205025]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct  9 10:04:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:04:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:59.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:05:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:05:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:00.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:05:00 compute-0 nova_compute[187439]: 2025-10-09 10:05:00.370 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:00 compute-0 nova_compute[187439]: 2025-10-09 10:05:00.371 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:00 compute-0 virtqemud[187041]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct  9 10:05:00 compute-0 virtqemud[187041]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct  9 10:05:00 compute-0 virtqemud[187041]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  9 10:05:00 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: cache status {prefix=cache status} (starting...)
Oct  9 10:05:00 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:05:01 compute-0 lvm[205318]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:05:01 compute-0 lvm[205318]: VG ceph_vg0 finished
Oct  9 10:05:01 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: client ls {prefix=client ls} (starting...)
Oct  9 10:05:01 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:05:01 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26440 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:01 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26705 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct  9 10:05:01 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2670855265' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  9 10:05:01 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v960: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:05:01 compute-0 kernel: block loop3: the capability attribute has been deprecated.
Oct  9 10:05:01 compute-0 nova_compute[187439]: 2025-10-09 10:05:01.243 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:01 compute-0 rsyslogd[1243]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 10:05:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct  9 10:05:01 compute-0 rsyslogd[1243]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 10:05:01 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  9 10:05:01 compute-0 nova_compute[187439]: 2025-10-09 10:05:01.413 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:01 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26461 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:01 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26717 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:05:01 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1435948674' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:05:01 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: damage ls {prefix=damage ls} (starting...)
Oct  9 10:05:01 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:05:01 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26479 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:01 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26744 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:01 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.16899 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:01 compute-0 nova_compute[187439]: 2025-10-09 10:05:01.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:01.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:01 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: dump loads {prefix=dump loads} (starting...)
Oct  9 10:05:01 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:05:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct  9 10:05:01 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  9 10:05:01 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct  9 10:05:01 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:05:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:01 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:02 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct  9 10:05:02 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:05:02 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26503 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:02 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26765 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:02 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26509 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:02 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct  9 10:05:02 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:05:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:02.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:05:02] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Oct  9 10:05:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:05:02] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Oct  9 10:05:02 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct  9 10:05:02 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:05:02 compute-0 nova_compute[187439]: 2025-10-09 10:05:02.413 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:05:02 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26801 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:02 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26548 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:02 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct  9 10:05:02 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3242075144' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct  9 10:05:02 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct  9 10:05:02 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:05:02 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26810 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:02 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct  9 10:05:02 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:05:02 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.16971 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:02 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.16974 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:02 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: ops {prefix=ops} (starting...)
Oct  9 10:05:02 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:05:03 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26831 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:03 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct  9 10:05:03 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3551404967' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct  9 10:05:03 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct  9 10:05:03 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  9 10:05:03 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v961: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:05:03 compute-0 nova_compute[187439]: 2025-10-09 10:05:03.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:05:03 compute-0 nova_compute[187439]: 2025-10-09 10:05:03.246 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  9 10:05:03 compute-0 nova_compute[187439]: 2025-10-09 10:05:03.247 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  9 10:05:03 compute-0 nova_compute[187439]: 2025-10-09 10:05:03.280 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  9 10:05:03 compute-0 nova_compute[187439]: 2025-10-09 10:05:03.280 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:05:03 compute-0 nova_compute[187439]: 2025-10-09 10:05:03.280 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:05:03 compute-0 nova_compute[187439]: 2025-10-09 10:05:03.280 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  9 10:05:03 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct  9 10:05:03 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/845671599' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct  9 10:05:03 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17007 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:03 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct  9 10:05:03 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  9 10:05:03 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct  9 10:05:03 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2635017389' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  9 10:05:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:03.571Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:03.581Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:03.581Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:03.581Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:03 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: session ls {prefix=session ls} (starting...)
Oct  9 10:05:03 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:05:03 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct  9 10:05:03 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/802223364' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  9 10:05:03 compute-0 podman[205749]: 2025-10-09 10:05:03.681803791 +0000 UTC m=+0.117877302 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  9 10:05:03 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17034 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:03 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: status {prefix=status} (starting...)
Oct  9 10:05:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:05:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:03.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:05:03 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  9 10:05:03 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1062635565' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  9 10:05:03 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  9 10:05:03 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1096112260' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  9 10:05:03 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26644 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T10:05:03.951+0000 7f49f65cf640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  9 10:05:03 compute-0 ceph-mgr[4772]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  9 10:05:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct  9 10:05:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3081418931' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  9 10:05:04 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26909 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:04 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T10:05:04.062+0000 7f49f65cf640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  9 10:05:04 compute-0 ceph-mgr[4772]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  9 10:05:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct  9 10:05:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1240712361' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  9 10:05:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:04.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct  9 10:05:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/461285032' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  9 10:05:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Oct  9 10:05:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3331225401' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct  9 10:05:04 compute-0 nova_compute[187439]: 2025-10-09 10:05:04.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:05:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:05:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:05:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct  9 10:05:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3984257628' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  9 10:05:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  9 10:05:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/604418692' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  9 10:05:04 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17130 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:04 compute-0 ceph-mgr[4772]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  9 10:05:04 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T10:05:04.862+0000 7f49f65cf640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  9 10:05:04 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26701 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:04 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26975 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:05 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct  9 10:05:05 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3683240453' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  9 10:05:05 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v962: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:05:05 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27002 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:05 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26725 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:05 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26752 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:05 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26755 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:05:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:05.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:05:05 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17202 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:05 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26782 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:05 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27050 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:06 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17229 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:06.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:06 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26806 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:06 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27068 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  9 10:05:06 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2569366303' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  9 10:05:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:06 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17256 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct  9 10:05:06 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2593566875' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  9 10:05:06 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26836 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:06 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27092 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:06 compute-0 nova_compute[187439]: 2025-10-09 10:05:06.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:05:06 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17277 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:06 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct  9 10:05:07 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/539467632' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  9 10:05:07 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26863 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:07 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27122 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:07.098Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:07.111Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:07.112Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:07.112Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=68) [0]/[1] async=[0] r=0 lpr=68 pi=[62,68)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70 pruub=14.995941162s) [0] async=[0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 40'1059 active pruub 227.580154419s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70 pruub=14.995877266s) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.580154419s@ mbc={}] exit Reset 0.000091 1 0.000162
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70 pruub=14.995877266s) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.580154419s@ mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70 pruub=14.995877266s) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.580154419s@ mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70 pruub=14.995877266s) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.580154419s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70 pruub=14.995877266s) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.580154419s@ mbc={}] exit Start 0.000073 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70 pruub=14.995877266s) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.580154419s@ mbc={}] enter Started/Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=68) [0]/[1] async=[0] r=0 lpr=68 pi=[62,68)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.832800 1 0.000119
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=68) [0]/[1] async=[0] r=0 lpr=68 pi=[62,68)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.007779 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=68) [0]/[1] async=[0] r=0 lpr=68 pi=[62,68)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.009072 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=68) [0]/[1] async=[0] r=0 lpr=68 pi=[62,68)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.009224 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=68) [0]/[1] async=[0] r=0 lpr=68 pi=[62,68)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70 pruub=14.995309830s) [0] async=[0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 40'1059 active pruub 227.579925537s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70 pruub=14.995251656s) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.579925537s@ mbc={}] exit Reset 0.000082 1 0.000144
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70 pruub=14.995251656s) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.579925537s@ mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70 pruub=14.995251656s) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.579925537s@ mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70 pruub=14.995251656s) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.579925537s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70 pruub=14.995251656s) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.579925537s@ mbc={}] exit Start 0.000044 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70 pruub=14.995251656s) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.579925537s@ mbc={}] enter Started/Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.5( v 68'1074 (0'0,68'1074] local-lis/les=69/70 n=6 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 crt=68'1074 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/60 les/c/f=70/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003291 4 0.000181
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/60 les/c/f=70/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.5( v 68'1074 (0'0,68'1074] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/60 les/c/f=70/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 crt=68'1074 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003249 4 0.000101
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/60 les/c/f=70/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/60 les/c/f=70/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.5( v 68'1074 (0'0,68'1074] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/60 les/c/f=70/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 crt=68'1074 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.5( v 68'1074 (0'0,68'1074] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/60 les/c/f=70/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 crt=68'1074 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000038 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.5( v 68'1074 (0'0,68'1074] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/60 les/c/f=70/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 crt=68'1074 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=67/61 les/c/f=68/62/0 sis=69) [1] r=0 lpr=69 pi=[61,69)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/61 les/c/f=70/62/0 sis=69) [1] r=0 lpr=69 pi=[61,69)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003417 4 0.000300
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 70 handle_osd_map epochs [70,70], i have 70, src has [1,70]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/60 les/c/f=70/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003342 4 0.000348
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/60 les/c/f=70/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/60 les/c/f=70/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/60 les/c/f=70/61/0 sis=69) [1] r=0 lpr=69 pi=[60,69)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/61 les/c/f=70/62/0 sis=69) [1] r=0 lpr=69 pi=[61,69)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/61 les/c/f=70/62/0 sis=69) [1] r=0 lpr=69 pi=[61,69)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000054 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 70 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/61 les/c/f=70/62/0 sis=69) [1] r=0 lpr=69 pi=[61,69)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 70 heartbeat osd_stat(store_statfs(0x4fcb3f000/0x0/0x4ffc00000, data 0x73096/0xda000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1458176 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.1d deep-scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.670196 6 0.000179
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.670677 6 0.000172
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=4 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.671089 6 0.000144
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=4 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=4 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.671197 6 0.000056
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001131 2 0.000050
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001102 2 0.000085
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=4 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001172 2 0.000084
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=4 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001162 2 0.000163
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.1d deep-scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.1e( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 DELETING pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.073687 2 0.000198
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.1e( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.074873 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.1e( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.745182 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.6( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=68/69 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 DELETING pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.117978 2 0.000111
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.6( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=68/69 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.119146 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.6( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=68/69 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.790008 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.16( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=68/69 n=4 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 DELETING pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.147438 2 0.000112
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.16( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=68/69 n=4 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.148655 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.16( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=68/69 n=4 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.819903 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.e( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 DELETING pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.184460 2 0.000054
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.e( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.185674 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 71 pg[10.e( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=68/69 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=-1 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.857046 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 1441792 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 71 heartbeat osd_stat(store_statfs(0x4fcb3f000/0x0/0x4ffc00000, data 0x73096/0xda000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1417216 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 609030 data_alloc: 218103808 data_used: 110592
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 1376256 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 71 handle_osd_map epochs [72,73], i have 71, src has [1,73]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 1269760 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[6.8( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'42 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 32.253432 68 0.000157
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[6.8( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 32.260896 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[6.8( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 32.260937 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[6.8( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 32.260973 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[6.8( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[6.8( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=74 pruub=15.746602058s) [0] r=-1 lpr=74 pi=[49,74)/1 crt=41'42 lcod 0'0 mlcod 0'0 active pruub 233.604583740s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[6.8( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=74 pruub=15.746577263s) [0] r=-1 lpr=74 pi=[49,74)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.604583740s@ mbc={}] exit Reset 0.000052 1 0.000093
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[6.8( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=74 pruub=15.746577263s) [0] r=-1 lpr=74 pi=[49,74)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.604583740s@ mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[6.8( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=74 pruub=15.746577263s) [0] r=-1 lpr=74 pi=[49,74)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.604583740s@ mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[6.8( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=74 pruub=15.746577263s) [0] r=-1 lpr=74 pi=[49,74)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.604583740s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[6.8( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=74 pruub=15.746577263s) [0] r=-1 lpr=74 pi=[49,74)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.604583740s@ mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[6.8( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=74 pruub=15.746577263s) [0] r=-1 lpr=74 pi=[49,74)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.604583740s@ mbc={}] enter Started/Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 74 handle_osd_map epochs [74,74], i have 74, src has [1,74]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.18(unlocked)] enter Initial
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=0 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000031 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=0 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000018
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000142 1 0.000030
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000185 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.8(unlocked)] enter Initial
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=0 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000027 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=0 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000010
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000138 1 0.000027
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000018 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000175 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 74 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1245184 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 74 handle_osd_map epochs [74,75], i have 74, src has [1,75]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 74 handle_osd_map epochs [75,75], i have 75, src has [1,75]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.001675 2 0.000051
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.001881 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.001902 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000046 1 0.000076
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.18( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.001336 2 0.000042
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.001529 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.001546 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74) [1] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000032 1 0.000056
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[10.8( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[6.8( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=74) [0] r=-1 lpr=74 pi=[49,74)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.006387 7 0.000058
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[6.8( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=74) [0] r=-1 lpr=74 pi=[49,74)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[6.8( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=74) [0] r=-1 lpr=74 pi=[49,74)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[6.8( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=74) [0] r=-1 lpr=74 pi=[49,74)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000042 1 0.000030
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[6.8( v 41'42 (0'0,41'42] local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=74) [0] r=-1 lpr=74 pi=[49,74)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[6.8( v 41'42 (0'0,41'42] lb MIN local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=74) [0] r=-1 lpr=74 DELETING pi=[49,74)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.002485 1 0.000021
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[6.8( v 41'42 (0'0,41'42] lb MIN local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=74) [0] r=-1 lpr=74 pi=[49,74)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.002554 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 75 pg[6.8( v 41'42 (0'0,41'42] lb MIN local-lis/les=49/50 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=74) [0] r=-1 lpr=74 pi=[49,74)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.008972 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1228800 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 75 handle_osd_map epochs [75,76], i have 75, src has [1,76]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.073390961s of 10.199485779s, submitted: 162
Oct  9 10:05:07 compute-0 ceph-osd[12528]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[10.18( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.003837 6 0.000289
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[10.18( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[10.18( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[10.8( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.003362 6 0.000023
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[10.8( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[10.8( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[10.18( v 40'1059 lc 40'122 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001996 3 0.000145
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[10.18( v 40'1059 lc 40'122 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[10.18( v 40'1059 lc 40'122 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000075 1 0.000032
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[10.18( v 40'1059 lc 40'122 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.028680 1 0.000059
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[10.8( v 40'1059 lc 40'172 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.030778 3 0.000079
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[10.8( v 40'1059 lc 40'172 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[10.8( v 40'1059 lc 40'172 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000064 1 0.000069
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[10.8( v 40'1059 lc 40'172 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052710 1 0.000056
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.d deep-scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.d deep-scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 76 heartbeat osd_stat(store_statfs(0x4fcb2f000/0x0/0x4ffc00000, data 0x7f947/0xec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[6.9(unlocked)] enter Initial
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=0 pi=[57,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000045 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=0 pi=[57,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000023
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000108 1 0.000051
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:07 compute-0 ceph-osd[12528]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[6.9( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=41'42 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001218 2 0.000081
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[6.9( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=41'42 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[6.9( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=41'42 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 76 pg[6.9( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=41'42 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 1261568 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 648925 data_alloc: 218103808 data_used: 106496
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.678506 1 0.000176
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.709452 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 1.713326 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000051 1 0.000093
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.626551 1 0.000025
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.710207 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 1.713592 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=75) [1]/[0] r=-1 lpr=75 pi=[53,75)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000057 1 0.000078
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[6.9( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=41'42 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.150006 2 0.000043
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[6.9( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=41'42 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.151371 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[6.9( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=41'42 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=76/77 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=76/77 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=76/77 n=1 ec=49/14 lis/c=76/57 les/c/f=77/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001220 3 0.000194
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=76/77 n=1 ec=49/14 lis/c=76/57 les/c/f=77/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=76/77 n=1 ec=49/14 lis/c=76/57 les/c/f=77/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=76/77 n=1 ec=49/14 lis/c=76/57 les/c/f=77/58/0 sis=76) [1] r=0 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 77 handle_osd_map epochs [77,77], i have 77, src has [1,77]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003238 2 0.000025
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 77 handle_osd_map epochs [77,77], i have 77, src has [1,77]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004621 2 0.000150
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:07 compute-0 ceph-osd[12528]: merge_log_dups log.dups.size()=0 olog.dups.size()=40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001299 2 0.000086
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:07 compute-0 ceph-osd[12528]: merge_log_dups log.dups.size()=0 olog.dups.size()=24
Oct  9 10:05:07 compute-0 ceph-osd[12528]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=24
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001058 2 0.000060
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.e scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.e scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 1253376 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 78 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997123 2 0.000111
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 78 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002888 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 78 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998211 2 0.000051
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 78 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002825 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 78 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 78 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 78 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 78 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 78 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 78 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/53 les/c/f=78/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.000934 3 0.000182
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 78 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/53 les/c/f=78/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 78 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/53 les/c/f=78/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 78 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/53 les/c/f=78/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 78 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 78 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/53 les/c/f=78/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.000971 3 0.000945
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 78 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/53 les/c/f=78/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 78 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/53 les/c/f=78/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 78 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/53 les/c/f=78/55/0 sis=77) [1] r=0 lpr=77 pi=[53,77)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 78 handle_osd_map epochs [78,78], i have 78, src has [1,78]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1236992 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.b scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.b scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1212416 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1204224 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 78 handle_osd_map epochs [79,81], i have 78, src has [1,81]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[6.b(unlocked)] enter Initial
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=0 pi=[61,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=0 pi=[61,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000023
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000090 1 0.000035
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 79 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [1] r=0 lpr=62 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 22.014674 46 0.000229
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 79 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [1] r=0 lpr=62 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 22.017684 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 79 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [1] r=0 lpr=62 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 23.018700 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 79 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [1] r=0 lpr=62 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 23.018718 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 79 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [1] r=0 lpr=62 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 79 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79 pruub=9.985282898s) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 active pruub 234.581420898s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79 pruub=9.985259056s) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.581420898s@ mbc={}] exit Reset 0.000047 3 0.000069
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79 pruub=9.985259056s) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.581420898s@ mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79 pruub=9.985259056s) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.581420898s@ mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79 pruub=9.985259056s) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.581420898s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79 pruub=9.985259056s) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.581420898s@ mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79 pruub=9.985259056s) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.581420898s@ mbc={}] enter Started/Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 79 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [1] r=0 lpr=62 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 22.015098 46 0.000126
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 79 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [1] r=0 lpr=62 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 22.018391 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 79 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [1] r=0 lpr=62 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 23.020152 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 79 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [1] r=0 lpr=62 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 23.020174 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 79 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [1] r=0 lpr=62 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 79 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79 pruub=9.984782219s) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 active pruub 234.581420898s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79 pruub=9.984754562s) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.581420898s@ mbc={}] exit Reset 0.000042 3 0.000065
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79 pruub=9.984754562s) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.581420898s@ mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79 pruub=9.984754562s) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.581420898s@ mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79 pruub=9.984754562s) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.581420898s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79 pruub=9.984754562s) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.581420898s@ mbc={}] exit Start 0.000018 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79 pruub=9.984754562s) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.581420898s@ mbc={}] enter Started/Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[6.b( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001340 2 0.000031
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[6.b( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[6.b( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 81 pg[6.b( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1064960 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 673732 data_alloc: 218103808 data_used: 110592
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.985648 3 0.000022
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.985678 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000059 1 0.000085
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.985799 3 0.000036
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.985872 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=-1 lpr=79 pi=[62,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000156 1 0.000218
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000017 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 82 handle_osd_map epochs [82,82], i have 82, src has [1,82]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[6.b( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986007 2 0.000042
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[6.b( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.987500 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[6.b( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[6.b( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=81/82 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001425 2 0.000345
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000653 2 0.000076
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000098 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[6.b( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=81/82 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[6.b( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=81/82 n=1 ec=49/14 lis/c=81/61 les/c/f=82/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.001974 5 0.000154
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[6.b( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=81/82 n=1 ec=49/14 lis/c=81/61 les/c/f=82/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[6.b( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=81/82 n=1 ec=49/14 lis/c=81/61 les/c/f=82/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000099 1 0.000102
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[6.b( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=81/82 n=1 ec=49/14 lis/c=81/61 les/c/f=82/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[6.b( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=81/82 n=1 ec=49/14 lis/c=81/61 les/c/f=82/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000008 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[6.b( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=81/82 n=1 ec=49/14 lis/c=81/61 les/c/f=82/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=81/82 n=1 ec=49/14 lis/c=81/61 les/c/f=82/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 41'42 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.007600 1 0.000082
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=81/82 n=1 ec=49/14 lis/c=81/61 les/c/f=82/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 41'42 active mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=81/82 n=1 ec=49/14 lis/c=81/61 les/c/f=82/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 41'42 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=81/82 n=1 ec=49/14 lis/c=81/61 les/c/f=82/62/0 sis=81) [1] r=0 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 41'42 active mbc={255={}}] enter Started/Primary/Active/Clean
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 82 heartbeat osd_stat(store_statfs(0x4fcb20000/0x0/0x4ffc00000, data 0x89e3d/0xfb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 1015808 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 82 handle_osd_map epochs [83,83], i have 83, src has [1,83]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001619 3 0.000224
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.002495 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001859 3 0.000077
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003373 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.002308 5 0.000260
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.002426 5 0.000285
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000252 1 0.000191
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000474 1 0.000046
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.063659 2 0.000067
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.064312 1 0.000130
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.015311 1 0.000048
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.028386 2 0.000054
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 83 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.13 deep-scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.13 deep-scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 958464 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.904886 1 0.000160
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.015557 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.018963 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.018982 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84 pruub=14.986342430s) [0] async=[0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 40'1059 active pruub 242.587265015s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84 pruub=14.986305237s) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 242.587265015s@ mbc={}] exit Reset 0.000067 1 0.000107
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84 pruub=14.986305237s) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 242.587265015s@ mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84 pruub=14.986305237s) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 242.587265015s@ mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84 pruub=14.986305237s) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 242.587265015s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84 pruub=14.986305237s) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 242.587265015s@ mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84 pruub=14.986305237s) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 242.587265015s@ mbc={}] enter Started/Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.949598 1 0.000111
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.016556 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.019085 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.019128 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] async=[0] r=0 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84 pruub=14.985722542s) [0] async=[0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 40'1059 active pruub 242.587631226s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84 pruub=14.985552788s) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 242.587631226s@ mbc={}] exit Reset 0.000204 1 0.000282
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84 pruub=14.985552788s) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 242.587631226s@ mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84 pruub=14.985552788s) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 242.587631226s@ mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84 pruub=14.985552788s) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 242.587631226s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84 pruub=14.985552788s) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 242.587631226s@ mbc={}] exit Start 0.000096 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84 pruub=14.985552788s) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 242.587631226s@ mbc={}] enter Started/Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 84 handle_osd_map epochs [84,84], i have 84, src has [1,84]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 860160 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.779906 6 0.000444
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.781349 6 0.000113
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000678 2 0.000183
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000809 2 0.000207
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=-1 lpr=84 DELETING pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.088841 2 0.000120
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.089720 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.870043 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=-1 lpr=84 DELETING pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.118323 2 0.000199
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.119123 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=-1 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.900635 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 753664 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.2 deep-scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.451473236s of 10.533190727s, submitted: 91
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.2 deep-scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 745472 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 668821 data_alloc: 218103808 data_used: 110592
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 85 heartbeat osd_stat(store_statfs(0x4fcb16000/0x0/0x4ffc00000, data 0x91d81/0x105000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.c scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.c scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 737280 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.c scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.c scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 729088 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 720896 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 85 heartbeat osd_stat(store_statfs(0x4fcb17000/0x0/0x4ffc00000, data 0x91d81/0x105000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 85 handle_osd_map epochs [86,87], i have 85, src has [1,87]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 85 handle_osd_map epochs [86,87], i have 87, src has [1,87]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=69) [1] r=0 lpr=69 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 21.938324 43 0.000158
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=69) [1] r=0 lpr=69 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 21.941828 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=69) [1] r=0 lpr=69 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 22.947739 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=69) [1] r=0 lpr=69 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 22.947765 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=69) [1] r=0 lpr=69 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87 pruub=10.061434746s) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 active pruub 244.586517334s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87 pruub=10.061408043s) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 244.586517334s@ mbc={}] exit Reset 0.000066 1 0.000108
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87 pruub=10.061408043s) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 244.586517334s@ mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87 pruub=10.061408043s) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 244.586517334s@ mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87 pruub=10.061408043s) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 244.586517334s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87 pruub=10.061408043s) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 244.586517334s@ mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87 pruub=10.061408043s) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 244.586517334s@ mbc={}] enter Started/Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=69) [1] r=0 lpr=69 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 21.939258 43 0.000251
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=69) [1] r=0 lpr=69 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 21.942656 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=69) [1] r=0 lpr=69 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 22.947974 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=69) [1] r=0 lpr=69 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 22.947995 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=69) [1] r=0 lpr=69 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87 pruub=10.060705185s) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 active pruub 244.586517334s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87 pruub=10.060690880s) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 244.586517334s@ mbc={}] exit Reset 0.000029 1 0.000059
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87 pruub=10.060690880s) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 244.586517334s@ mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87 pruub=10.060690880s) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 244.586517334s@ mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87 pruub=10.060690880s) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 244.586517334s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87 pruub=10.060690880s) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 244.586517334s@ mbc={}] exit Start 0.000004 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 87 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87 pruub=10.060690880s) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 244.586517334s@ mbc={}] enter Started/Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 704512 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.443463 3 0.000028
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.443495 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000226 1 0.000250
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.443788 3 0.000028
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.443826 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=-1 lpr=87 pi=[69,87)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000097 1 0.000128
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 88 handle_osd_map epochs [88,88], i have 88, src has [1,88]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004902 2 0.000095
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000027 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004612 2 0.000034
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000039 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 88 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 688128 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683331 data_alloc: 218103808 data_used: 110592
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000850 3 0.000078
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.005854 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000609 3 0.000135
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.005348 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=69/70 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 89 handle_osd_map epochs [89,89], i have 89, src has [1,89]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.004917 5 0.000501
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000176 1 0.000127
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.004689 5 0.000347
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.049000 1 0.000612
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035391 2 0.000050
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.084534 1 0.000186
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.036796 1 0.000044
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.056538 2 0.000069
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 89 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.b scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.b scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 655360 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.831525 1 0.000117
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.014450 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.019813 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.019836 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90 pruub=14.990008354s) [0] async=[0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 40'1059 active pruub 251.979660034s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 90 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.925341 1 0.000202
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.015240 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.021117 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.021144 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] async=[0] r=0 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90 pruub=14.989096642s) [0] async=[0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 40'1059 active pruub 251.979141235s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90 pruub=14.988950729s) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 251.979141235s@ mbc={}] exit Reset 0.000186 1 0.000248
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90 pruub=14.988950729s) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 251.979141235s@ mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90 pruub=14.988950729s) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 251.979141235s@ mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90 pruub=14.988950729s) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 251.979141235s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90 pruub=14.988950729s) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 251.979141235s@ mbc={}] exit Start 0.000052 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90 pruub=14.988950729s) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 251.979141235s@ mbc={}] enter Started/Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90 pruub=14.989036560s) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 251.979660034s@ mbc={}] exit Reset 0.001008 1 0.001069
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90 pruub=14.989036560s) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 251.979660034s@ mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90 pruub=14.989036560s) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 251.979660034s@ mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90 pruub=14.989036560s) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 251.979660034s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90 pruub=14.989036560s) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 251.979660034s@ mbc={}] exit Start 0.000008 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90 pruub=14.989036560s) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 251.979660034s@ mbc={}] enter Started/Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 90 heartbeat osd_stat(store_statfs(0x4fcb09000/0x0/0x4ffc00000, data 0x9a307/0x111000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 655360 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.518030 6 0.000109
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.518467 6 0.000152
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001078 2 0.000067
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001027 2 0.000408
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=-1 lpr=90 DELETING pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.059373 2 0.000267
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.060650 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.579236 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 917504 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=-1 lpr=90 DELETING pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.118372 2 0.000114
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.119519 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=-1 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.637879 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 917504 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 91 heartbeat osd_stat(store_statfs(0x4fcb07000/0x0/0x4ffc00000, data 0x9e15d/0x114000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.1c scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.054409027s of 10.098185539s, submitted: 52
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.1c scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 917504 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 675078 data_alloc: 218103808 data_used: 110592
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 91 heartbeat osd_stat(store_statfs(0x4fcb07000/0x0/0x4ffc00000, data 0x9e15d/0x114000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.12 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.12 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 917504 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 91 ms_handle_reset con 0x563ba754d800 session 0x563ba91e90e0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 91 ms_handle_reset con 0x563ba6bff400 session 0x563ba9199e00
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 91 heartbeat osd_stat(store_statfs(0x4fcb07000/0x0/0x4ffc00000, data 0x9e15d/0x114000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 92 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=67) [1] r=0 lpr=67 crt=41'42 mlcod 41'42 active+clean] exit Started/Primary/Active/Clean 31.658634 65 0.000192
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 92 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=67) [1] r=0 lpr=67 crt=41'42 mlcod 41'42 active mbc={255={}}] exit Started/Primary/Active 31.851846 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 92 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=67) [1] r=0 lpr=67 crt=41'42 mlcod 41'42 active mbc={255={}}] exit Started/Primary 32.607677 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 92 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=67) [1] r=0 lpr=67 crt=41'42 mlcod 41'42 active mbc={255={}}] exit Started 32.607703 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 92 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=67) [1] r=0 lpr=67 crt=41'42 mlcod 41'42 active mbc={255={}}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 92 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92 pruub=8.152629852s) [0] r=-1 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 41'42 active pruub 250.579452515s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 92 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92 pruub=8.152599335s) [0] r=-1 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 250.579452515s@ mbc={}] exit Reset 0.000058 1 0.000101
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 92 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92 pruub=8.152599335s) [0] r=-1 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 250.579452515s@ mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 92 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92 pruub=8.152599335s) [0] r=-1 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 250.579452515s@ mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 92 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92 pruub=8.152599335s) [0] r=-1 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 250.579452515s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 92 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92 pruub=8.152599335s) [0] r=-1 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 250.579452515s@ mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 92 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92 pruub=8.152599335s) [0] r=-1 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 250.579452515s@ mbc={}] enter Started/Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 92 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 909312 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 92 heartbeat osd_stat(store_statfs(0x4fcb04000/0x0/0x4ffc00000, data 0xa03e4/0x117000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 93 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=-1 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.610041 6 0.000041
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 93 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=-1 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 93 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=-1 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 93 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=-1 lpr=92 pi=[67,92)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.007845 3 0.000031
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 93 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=-1 lpr=92 pi=[67,92)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.007871 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 93 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=-1 lpr=92 pi=[67,92)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] enter Started/ToDelete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 93 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=-1 lpr=92 pi=[67,92)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 93 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=-1 lpr=92 pi=[67,92)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000052 1 0.000087
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 93 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=-1 lpr=92 pi=[67,92)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 93 pg[6.e( v 41'42 (0'0,41'42] lb MIN local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=-1 lpr=92 DELETING pi=[67,92)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.008447 2 0.000111
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 93 pg[6.e( v 41'42 (0'0,41'42] lb MIN local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=-1 lpr=92 pi=[67,92)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.008546 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 93 pg[6.e( v 41'42 (0'0,41'42] lb MIN local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=-1 lpr=92 pi=[67,92)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started 0.626505 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fcb04000/0x0/0x4ffc00000, data 0xa03e4/0x117000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.10 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.10 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 892928 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fcb00000/0x0/0x4ffc00000, data 0xa2386/0x11a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 94 pg[6.f(unlocked)] enter Initial
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 94 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=0 pi=[61,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000053 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 94 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=0 pi=[61,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 94 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000038
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 94 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 94 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 94 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 94 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000114 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 94 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 94 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 94 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 94 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000152 1 0.000213
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 94 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:07 compute-0 ceph-osd[12528]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 94 pg[6.f( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.000729 2 0.000333
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 94 pg[6.f( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 94 pg[6.f( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000028 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 94 pg[6.f( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.1d deep-scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.1d deep-scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 868352 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 94 handle_osd_map epochs [94,95], i have 94, src has [1,95]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 95 pg[6.f( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.008938 2 0.000074
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 95 pg[6.f( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.010050 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 95 pg[6.f( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 95 pg[6.f( v 41'42 lc 35'1 (0'0,41'42] local-lis/les=94/95 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=41'42 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 95 pg[6.f( v 41'42 lc 35'1 (0'0,41'42] local-lis/les=94/95 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 95 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 95 pg[6.f( v 41'42 lc 35'1 (0'0,41'42] local-lis/les=94/95 n=3 ec=49/14 lis/c=94/61 les/c/f=95/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.001122 4 0.000635
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 95 pg[6.f( v 41'42 lc 35'1 (0'0,41'42] local-lis/les=94/95 n=3 ec=49/14 lis/c=94/61 les/c/f=95/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 95 pg[6.f( v 41'42 lc 35'1 (0'0,41'42] local-lis/les=94/95 n=3 ec=49/14 lis/c=94/61 les/c/f=95/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000048 1 0.000057
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 95 pg[6.f( v 41'42 lc 35'1 (0'0,41'42] local-lis/les=94/95 n=3 ec=49/14 lis/c=94/61 les/c/f=95/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 95 pg[6.f( v 41'42 lc 35'1 (0'0,41'42] local-lis/les=94/95 n=3 ec=49/14 lis/c=94/61 les/c/f=95/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000016 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 95 pg[6.f( v 41'42 lc 35'1 (0'0,41'42] local-lis/les=94/95 n=3 ec=49/14 lis/c=94/61 les/c/f=95/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=94/95 n=3 ec=49/14 lis/c=94/61 les/c/f=95/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 41'42 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.126340 2 0.000084
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=94/95 n=3 ec=49/14 lis/c=94/61 les/c/f=95/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 41'42 active mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=94/95 n=3 ec=49/14 lis/c=94/61 les/c/f=95/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 41'42 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=94/95 n=3 ec=49/14 lis/c=94/61 les/c/f=95/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 41'42 active mbc={255={}}] enter Started/Primary/Active/Clean
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 3.12 deep-scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 3.12 deep-scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 851968 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 696711 data_alloc: 218103808 data_used: 110592
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.19 deep-scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.19 deep-scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 851968 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 95 handle_osd_map epochs [96,97], i have 95, src has [1,97]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 811008 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 794624 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 786432 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 98 heartbeat osd_stat(store_statfs(0x4fcaf3000/0x0/0x4ffc00000, data 0xaa821/0x128000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.a deep-scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.a deep-scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 786432 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 709283 data_alloc: 218103808 data_used: 110592
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.771794319s of 10.810998917s, submitted: 47
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 786432 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 770048 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 99 heartbeat osd_stat(store_statfs(0x4fcaed000/0x0/0x4ffc00000, data 0xae8af/0x12e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 99 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 1761280 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 3.b scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 3.b scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 1753088 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 1736704 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 729985 data_alloc: 218103808 data_used: 110592
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 103 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [1] r=0 lpr=62 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 53.569855 114 0.001307
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 103 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [1] r=0 lpr=62 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 53.572879 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 103 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [1] r=0 lpr=62 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 54.573745 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 103 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [1] r=0 lpr=62 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 54.573785 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 103 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=62) [1] r=0 lpr=62 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 103 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103 pruub=10.431035995s) [2] r=-1 lpr=103 pi=[62,103)/1 crt=40'1059 mlcod 0'0 active pruub 266.582458496s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 103 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103 pruub=10.430859566s) [2] r=-1 lpr=103 pi=[62,103)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 266.582458496s@ mbc={}] exit Reset 0.000225 1 0.000380
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 103 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103 pruub=10.430859566s) [2] r=-1 lpr=103 pi=[62,103)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 266.582458496s@ mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 103 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103 pruub=10.430859566s) [2] r=-1 lpr=103 pi=[62,103)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 266.582458496s@ mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 103 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103 pruub=10.430859566s) [2] r=-1 lpr=103 pi=[62,103)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 266.582458496s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 103 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103 pruub=10.430859566s) [2] r=-1 lpr=103 pi=[62,103)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 266.582458496s@ mbc={}] exit Start 0.000116 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 103 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103 pruub=10.430859566s) [2] r=-1 lpr=103 pi=[62,103)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 266.582458496s@ mbc={}] enter Started/Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 103 handle_osd_map epochs [101,103], i have 103, src has [1,103]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 1736704 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 104 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=-1 lpr=103 pi=[62,103)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.002973 3 0.000197
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 104 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=-1 lpr=103 pi=[62,103)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.003233 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 104 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=-1 lpr=103 pi=[62,103)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 104 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 104 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000330 1 0.000480
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 104 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 104 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 104 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 104 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000086 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 104 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 104 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 104 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 104 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002299 2 0.000545
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 104 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 104 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000056 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 104 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 104 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 104 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 1720320 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003169 3 0.000174
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.005650 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=62/63 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fcad9000/0x0/0x4ffc00000, data 0xbab1c/0x140000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1703936 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.13(unlocked)] enter Initial
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=0 lpr=0 pi=[61,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000053 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=0 lpr=0 pi=[61,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=0 lpr=105 pi=[61,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000030
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=0 lpr=105 pi=[61,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=0 lpr=105 pi=[61,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=0 lpr=105 pi=[61,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=0 lpr=105 pi=[61,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=0 lpr=105 pi=[61,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=0 lpr=105 pi=[61,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=0 lpr=105 pi=[61,105)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=0 lpr=105 pi=[61,105)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000175 1 0.000056
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=0 lpr=105 pi=[61,105)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=0 lpr=105 pi=[61,105)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000029 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=0 lpr=105 pi=[61,105)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000222 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=0 lpr=105 pi=[61,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.470917 5 0.000534
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000062 1 0.000087
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000339 1 0.000098
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.028330 2 0.000043
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 105 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=0 lpr=105 pi=[61,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.552330 2 0.000056
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=0 lpr=105 pi=[61,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.552574 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=0 lpr=105 pi=[61,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.552595 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=0 lpr=105 pi=[61,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000060 1 0.000087
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000018 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.523213 1 0.000067
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.023141 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.028820 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.029263 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] async=[2] r=0 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106 pruub=15.447549820s) [2] async=[2] r=-1 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 40'1059 active pruub 274.632110596s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106 pruub=15.447519302s) [2] r=-1 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 274.632110596s@ mbc={}] exit Reset 0.000050 1 0.000079
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106 pruub=15.447519302s) [2] r=-1 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 274.632110596s@ mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106 pruub=15.447519302s) [2] r=-1 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 274.632110596s@ mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106 pruub=15.447519302s) [2] r=-1 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 274.632110596s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106 pruub=15.447519302s) [2] r=-1 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 274.632110596s@ mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106 pruub=15.447519302s) [2] r=-1 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 274.632110596s@ mbc={}] enter Started/Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 106 handle_osd_map epochs [106,106], i have 106, src has [1,106]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 1622016 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.14(unlocked)] enter Initial
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=0 lpr=0 pi=[68,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000065 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=0 lpr=0 pi=[68,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=0 lpr=107 pi=[68,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000029
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=0 lpr=107 pi=[68,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=0 lpr=107 pi=[68,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=0 lpr=107 pi=[68,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=0 lpr=107 pi=[68,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=0 lpr=107 pi=[68,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=0 lpr=107 pi=[68,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=0 lpr=107 pi=[68,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=0 lpr=107 pi=[68,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000159 1 0.000045
Oct  9 10:05:07 compute-0 ceph-osd[12528]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.13( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.005181 5 0.000048
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.13( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.13( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=0 lpr=107 pi=[68,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=0 lpr=107 pi=[68,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000040 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=0 lpr=107 pi=[68,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.001133 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=0 lpr=107 pi=[68,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=-1 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.006907 7 0.000076
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=-1 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=-1 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=-1 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000033 1 0.000050
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=-1 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.13( v 40'1059 lc 40'297 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.004984 4 0.000174
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.13( v 40'1059 lc 40'297 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.13( v 40'1059 lc 40'297 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000076 1 0.000031
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.13( v 40'1059 lc 40'297 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=-1 lpr=106 DELETING pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.031395 2 0.000130
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=-1 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.031471 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=-1 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.038438 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.064524 1 0.000060
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.b scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.b scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=0 lpr=107 pi=[68,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.396591 2 0.000987
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=0 lpr=107 pi=[68,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.397751 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.327765 1 0.000062
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.397456 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 1.402690 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=0 lpr=107 pi=[68,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.397776 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=0 lpr=107 pi=[68,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[68,108)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[68,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000053 1 0.000123
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[68,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[68,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[68,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[68,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[68,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[61,106)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000165 1 0.000208
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001455 2 0.000036
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: merge_log_dups log.dups.size()=0 olog.dups.size()=30
Oct  9 10:05:07 compute-0 ceph-osd[12528]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=30
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001111 2 0.000058
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 1581056 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 757087 data_alloc: 218103808 data_used: 106496
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.8 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.009899139s of 10.069742203s, submitted: 112
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.8 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 108 handle_osd_map epochs [108,109], i have 108, src has [1,109]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999552 2 0.000124
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002206 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/61 les/c/f=109/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.000826 3 0.000133
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/61 les/c/f=109/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/61 les/c/f=109/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000018 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/61 les/c/f=109/62/0 sis=108) [1] r=0 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:07 compute-0 ceph-osd[12528]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 109 pg[10.14( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.006018 6 0.000029
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 109 pg[10.14( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 109 handle_osd_map epochs [109,109], i have 109, src has [1,109]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 109 pg[10.14( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 109 handle_osd_map epochs [109,109], i have 109, src has [1,109]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 109 pg[10.14( v 40'1059 lc 40'125 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[68,108)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002620 3 0.000092
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 109 pg[10.14( v 40'1059 lc 40'125 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[68,108)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 109 pg[10.14( v 40'1059 lc 40'125 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[68,108)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000033 1 0.000027
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 109 pg[10.14( v 40'1059 lc 40'125 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[68,108)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 76046336 unmapped: 1507328 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[68,108)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.080176 1 0.000042
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[68,108)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[68,108)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.925234 1 0.000095
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[68,108)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.008211 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[68,108)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.014257 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[68,108)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000048 1 0.000080
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000028 1 0.000033
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:07 compute-0 ceph-osd[12528]: merge_log_dups log.dups.size()=0 olog.dups.size()=25
Oct  9 10:05:07 compute-0 ceph-osd[12528]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000999 3 0.000029
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 1482752 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 3.6 deep-scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 3.6 deep-scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 110 handle_osd_map epochs [110,111], i have 110, src has [1,111]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005943 2 0.000044
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007011 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=110/111 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 111 handle_osd_map epochs [110,111], i have 111, src has [1,111]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=110/111 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=110/111 n=5 ec=53/34 lis/c=110/68 les/c/f=111/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001483 3 0.000355
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=110/111 n=5 ec=53/34 lis/c=110/68 les/c/f=111/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=110/111 n=5 ec=53/34 lis/c=110/68 les/c/f=111/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=110/111 n=5 ec=53/34 lis/c=110/68 les/c/f=111/69/0 sis=110) [1] r=0 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 1474560 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 111 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 111 heartbeat osd_stat(store_statfs(0x4fcacc000/0x0/0x4ffc00000, data 0xc4b7d/0x14f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.c scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.c scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 1466368 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 3.2 deep-scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 3.2 deep-scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 1449984 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 776644 data_alloc: 218103808 data_used: 110592
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 111 ms_handle_reset con 0x563ba754d800 session 0x563ba917d680
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 1449984 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 1441792 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.b scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.b scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 1441792 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 111 handle_osd_map epochs [112,113], i have 111, src has [1,113]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 113 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xc6b1f/0x152000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.e deep-scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.e deep-scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 1425408 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.6 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 12.6 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 368640 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 789446 data_alloc: 218103808 data_used: 110592
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.a deep-scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.963601112s of 10.011388779s, submitted: 49
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.a deep-scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 360448 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 360448 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 352256 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fcac0000/0x0/0x4ffc00000, data 0xccde3/0x15b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 352256 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 115 handle_osd_map epochs [115,116], i have 115, src has [1,116]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19(unlocked)] enter Initial
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=0 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=0 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000019
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000092 1 0.000033
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000134 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 116 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.884045 2 0.000049
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.884219 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.884279 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000369 1 0.000534
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000113 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 327680 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 809264 data_alloc: 218103808 data_used: 110592
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 118 pg[10.19( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.004743 6 0.000492
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 118 pg[10.19( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 118 pg[10.19( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 118 pg[10.19( v 40'1059 lc 40'189 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001574 3 0.000203
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 118 pg[10.19( v 40'1059 lc 40'189 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 118 pg[10.19( v 40'1059 lc 40'189 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000066 1 0.000083
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 118 pg[10.19( v 40'1059 lc 40'189 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.049863 1 0.000037
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 1368064 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fcab5000/0x0/0x4ffc00000, data 0xd2f8f/0x164000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.954766 1 0.000084
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.006457 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.011390 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000222 1 0.000295
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000094 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000055 1 0.000214
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:07 compute-0 ceph-osd[12528]: merge_log_dups log.dups.size()=0 olog.dups.size()=40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000738 3 0.000065
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1351680 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.b deep-scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.b deep-scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995898 2 0.000110
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996784 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=119/120 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 120 handle_osd_map epochs [119,120], i have 120, src has [1,120]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=119/120 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=119/120 n=5 ec=53/34 lis/c=119/79 les/c/f=120/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001263 3 0.000091
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=119/120 n=5 ec=53/34 lis/c=119/79 les/c/f=120/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=119/120 n=5 ec=53/34 lis/c=119/79 les/c/f=120/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=119/120 n=5 ec=53/34 lis/c=119/79 les/c/f=120/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 120 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 1343488 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 1343488 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.c deep-scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.c deep-scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 1318912 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 833800 data_alloc: 218103808 data_used: 106496
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.982757568s of 10.031836510s, submitted: 60
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 1302528 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1294336 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fcaae000/0x0/0x4ffc00000, data 0xd8f11/0x16e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fcaae000/0x0/0x4ffc00000, data 0xd8f11/0x16e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1294336 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 1286144 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b(unlocked)] enter Initial
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=0 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=0 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000027
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000095 1 0.000045
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000026 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000190 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 122 handle_osd_map epochs [122,123], i have 123, src has [1,123]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.433377 2 0.000103
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.433589 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.433613 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000043 1 0.000073
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 123 handle_osd_map epochs [123,123], i have 123, src has [1,123]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 1220608 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852043 data_alloc: 218103808 data_used: 106496
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 1196032 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 124 pg[10.1b( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=2 mbc={}] exit Started/Stray 1.930692 5 0.000209
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 124 pg[10.1b( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 124 pg[10.1b( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 124 pg[10.1b( v 40'1059 lc 40'529 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001850 4 0.000087
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 124 pg[10.1b( v 40'1059 lc 40'529 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 124 pg[10.1b( v 40'1059 lc 40'529 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000037 1 0.000053
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 124 pg[10.1b( v 40'1059 lc 40'529 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.014743 1 0.000020
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 1196032 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.256516 1 0.000021
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.273277 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.204261 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000662 1 0.000864
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000111 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 1 0.000218
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:07 compute-0 ceph-osd[12528]: merge_log_dups log.dups.size()=0 olog.dups.size()=15
Oct  9 10:05:07 compute-0 ceph-osd[12528]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000529 3 0.000055
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fca9c000/0x0/0x4ffc00000, data 0xe31ef/0x17e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 1171456 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001474 2 0.000444
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002545 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=125/126 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=125/126 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 126 handle_osd_map epochs [125,126], i have 126, src has [1,126]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=125/126 n=5 ec=53/34 lis/c=125/84 les/c/f=126/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001393 3 0.000805
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=125/126 n=5 ec=53/34 lis/c=125/84 les/c/f=126/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=125/126 n=5 ec=53/34 lis/c=125/84 les/c/f=126/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000040 0 0.000000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=125/126 n=5 ec=53/34 lis/c=125/84 les/c/f=126/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.1b deep-scrub starts
Oct  9 10:05:07 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.1b deep-scrub ok
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 126 handle_osd_map epochs [126,126], i have 126, src has [1,126]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 1155072 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 1155072 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862978 data_alloc: 218103808 data_used: 110592
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 1155072 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 126 handle_osd_map epochs [127,128], i have 126, src has [1,128]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.363139153s of 10.403436661s, submitted: 44
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 1130496 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 1122304 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fca94000/0x0/0x4ffc00000, data 0xe93a3/0x187000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 128 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 1114112 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 1064960 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877780 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 131 ms_handle_reset con 0x563ba754d400 session 0x563ba81cd4a0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 1064960 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fca8b000/0x0/0x4ffc00000, data 0xef454/0x190000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 131 handle_osd_map epochs [132,133], i have 131, src has [1,133]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 131 handle_osd_map epochs [132,133], i have 133, src has [1,133]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77553664 unmapped: 1048576 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77553664 unmapped: 1048576 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 999424 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 999424 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 884900 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca84000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 991232 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 991232 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 991232 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 983040 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 983040 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 884900 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca84000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 974848 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.910986900s of 14.925504684s, submitted: 16
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 974848 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 974848 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 966656 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 942080 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 884452 data_alloc: 218103808 data_used: 114688
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 942080 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 933888 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 ms_handle_reset con 0x563ba96ebc00 session 0x563ba9dd0b40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 ms_handle_reset con 0x563ba768d000 session 0x563ba831fe00
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 ms_handle_reset con 0x563ba813e800 session 0x563ba91e8d20
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 925696 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 917504 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 901120 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885980 data_alloc: 218103808 data_used: 114688
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 901120 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 892928 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 892928 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 884736 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 884736 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885980 data_alloc: 218103808 data_used: 114688
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.982664108s of 14.991124153s, submitted: 10
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 884736 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 876544 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 876544 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 868352 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 868352 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886080 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 843776 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 835584 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 835584 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 819200 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 770048 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886096 data_alloc: 218103808 data_used: 114688
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 770048 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.519762039s of 10.531422615s, submitted: 11
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 745472 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 745472 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 ms_handle_reset con 0x563ba754d000 session 0x563ba92434a0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 737280 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 737280 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886080 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 737280 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 712704 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885225 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 712704 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 704512 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 704512 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 704512 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77905920 unmapped: 696320 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885225 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77905920 unmapped: 696320 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 688128 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 688128 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 679936 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 679936 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885225 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 679936 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 671744 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 671744 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 671744 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885225 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 638976 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885225 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.465995789s of 29.472551346s, submitted: 5
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 638976 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 573440 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 573440 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 573440 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 540672 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 540672 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 540672 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 524288 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 524288 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 524288 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 507904 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 507904 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 499712 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 499712 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 491520 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 491520 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 491520 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 483328 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 ms_handle_reset con 0x563ba754d400 session 0x563baa26cb40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 483328 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 475136 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 475136 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 475136 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 466944 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 466944 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 466944 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 458752 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 458752 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 442368 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 442368 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 50.456504822s of 50.457698822s, submitted: 1
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 442368 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 434176 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 434176 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 434176 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885241 data_alloc: 218103808 data_used: 114688
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 425984 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 425984 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 425984 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 417792 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 417792 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885241 data_alloc: 218103808 data_used: 114688
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 409600 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 409600 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 409600 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 409600 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 401408 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885241 data_alloc: 218103808 data_used: 114688
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 401408 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 368640 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 360448 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.996906281s of 17.000619888s, submitted: 4
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 360448 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 344064 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885109 data_alloc: 218103808 data_used: 114688
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 344064 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 335872 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 335872 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 335872 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 327680 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885109 data_alloc: 218103808 data_used: 114688
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 327680 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 319488 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 319488 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 319488 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 311296 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885109 data_alloc: 218103808 data_used: 114688
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 311296 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 311296 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 303104 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 303104 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 294912 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885109 data_alloc: 218103808 data_used: 114688
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 294912 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 294912 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 278528 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 278528 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 278528 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885109 data_alloc: 218103808 data_used: 114688
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 270336 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 ms_handle_reset con 0x563ba6bff400 session 0x563ba74dfa40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 270336 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 262144 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 253952 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 245760 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885109 data_alloc: 218103808 data_used: 114688
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 245760 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 237568 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 237568 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 237568 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 237568 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885109 data_alloc: 218103808 data_used: 114688
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78372864 unmapped: 229376 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 221184 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.124507904s of 34.125598907s, submitted: 1
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 221184 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 221184 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 212992 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885225 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 212992 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 212992 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 204800 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 204800 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 196608 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885225 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 196608 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 188416 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 188416 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78422016 unmapped: 180224 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 172032 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885225 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 172032 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 163840 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.921249390s of 14.924689293s, submitted: 3
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78422016 unmapped: 180224 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78422016 unmapped: 180224 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 172032 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 172032 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 172032 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 163840 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 163840 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 155648 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 155648 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 155648 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78454784 unmapped: 147456 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78454784 unmapped: 147456 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 139264 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 139264 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 122880 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 114688 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 114688 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 106496 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 122880 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 122880 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 114688 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 106496 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78503936 unmapped: 98304 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78503936 unmapped: 98304 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 90112 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 90112 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 90112 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 81920 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 81920 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 73728 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 73728 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 73728 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 65536 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 65536 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 57344 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 57344 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 57344 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 49152 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 49152 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 49152 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 40960 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 32768 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 24576 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 24576 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 24576 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 24576 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16384 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16384 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 8192 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 8192 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 8192 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 0 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 0 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1040384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1040384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1040384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1024000 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1024000 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1024000 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 999424 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 999424 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 983040 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 983040 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 983040 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 958464 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 958464 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 958464 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 942080 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 942080 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 942080 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 925696 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 925696 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78733312 unmapped: 917504 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 909312 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 909312 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 901120 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 901120 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 901120 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 892928 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 884736 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 884736 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 892928 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 892928 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 884736 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 884736 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 876544 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 876544 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 876544 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78807040 unmapped: 843776 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 835584 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 835584 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 827392 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 827392 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 827392 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 819200 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 819200 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 811008 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 811008 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 811008 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 6850 writes, 28K keys, 6850 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 6850 writes, 1264 syncs, 5.42 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 6850 writes, 28K keys, 6850 commit groups, 1.0 writes per commit group, ingest: 20.01 MB, 0.03 MB/s
Interval WAL: 6850 writes, 1264 syncs, 5.42 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x563ba5b2f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x563ba5b2f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 1.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78905344 unmapped: 745472 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78905344 unmapped: 745472 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78913536 unmapped: 737280 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78913536 unmapped: 737280 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 729088 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 729088 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 729088 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 720896 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 720896 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78938112 unmapped: 712704 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78946304 unmapped: 704512 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78946304 unmapped: 704512 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 696320 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78954496 unmapped: 696320 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78962688 unmapped: 688128 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78962688 unmapped: 688128 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78970880 unmapped: 679936 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78970880 unmapped: 679936 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78970880 unmapped: 679936 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 671744 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 671744 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78987264 unmapped: 663552 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78987264 unmapped: 663552 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78987264 unmapped: 663552 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 655360 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 655360 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 655360 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79003648 unmapped: 647168 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79003648 unmapped: 647168 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79003648 unmapped: 647168 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79011840 unmapped: 638976 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79011840 unmapped: 638976 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79020032 unmapped: 630784 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79020032 unmapped: 630784 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79020032 unmapped: 630784 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79028224 unmapped: 622592 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79028224 unmapped: 622592 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 614400 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 614400 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 606208 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 606208 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 606208 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79052800 unmapped: 598016 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79052800 unmapped: 598016 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79052800 unmapped: 598016 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79020032 unmapped: 630784 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79020032 unmapped: 630784 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79028224 unmapped: 622592 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79028224 unmapped: 622592 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79028224 unmapped: 622592 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 614400 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 614400 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79044608 unmapped: 606208 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79052800 unmapped: 598016 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79052800 unmapped: 598016 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 589824 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 589824 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 581632 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 581632 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 581632 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 573440 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 573440 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 565248 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 565248 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 565248 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 557056 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 557056 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 557056 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 548864 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 548864 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79118336 unmapped: 532480 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79118336 unmapped: 532480 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79118336 unmapped: 532480 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79126528 unmapped: 524288 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79126528 unmapped: 524288 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79126528 unmapped: 524288 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79134720 unmapped: 516096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79134720 unmapped: 516096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79134720 unmapped: 516096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 507904 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 507904 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79151104 unmapped: 499712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79151104 unmapped: 499712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79151104 unmapped: 499712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 79159296 unmapped: 491520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 206.912719727s of 206.914962769s, submitted: 1
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 417792 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1376256 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1376256 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1376256 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1376256 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1376256 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1376256 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1376256 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1376256 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1368064 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 1359872 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 1359872 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 1359872 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1351680 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1351680 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1351680 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1351680 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1351680 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1351680 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1351680 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80396288 unmapped: 1351680 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80404480 unmapped: 1343488 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80412672 unmapped: 1335296 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1318912 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1318912 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80429056 unmapped: 1318912 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 1310720 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80437248 unmapped: 1310720 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80445440 unmapped: 1302528 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 1294336 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1286144 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 1286144 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80470016 unmapped: 1277952 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 1269760 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 1269760 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 1269760 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 1269760 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 1269760 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 1269760 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 1269760 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 1269760 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 1269760 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1261568 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1261568 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80486400 unmapped: 1261568 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 1253376 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 1253376 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 1253376 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 1253376 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80494592 unmapped: 1253376 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80502784 unmapped: 1245184 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1236992 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1236992 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1236992 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1236992 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1236992 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1236992 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1236992 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1236992 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1236992 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1236992 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1236992 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1236992 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1236992 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1236992 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1236992 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1236992 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1236992 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80510976 unmapped: 1236992 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80519168 unmapped: 1228800 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80527360 unmapped: 1220608 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 1212416 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v963: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 1204224 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80551936 unmapped: 1196032 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 1187840 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 1187840 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 1187840 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 1187840 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 1187840 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 1187840 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 1187840 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 1187840 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80560128 unmapped: 1187840 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 ms_handle_reset con 0x563ba7551800 session 0x563ba756bc20
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 1179648 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 1163264 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: mgrc ms_handle_reset ms_handle_reset con 0x563ba7552c00
Oct  9 10:05:07 compute-0 ceph-osd[12528]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3631142817
Oct  9 10:05:07 compute-0 ceph-osd[12528]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3631142817,v1:192.168.122.100:6801/3631142817]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: mgrc handle_mgr_configure stats_period=5
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80691200 unmapped: 1056768 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80699392 unmapped: 1048576 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80707584 unmapped: 1040384 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80715776 unmapped: 1032192 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80723968 unmapped: 1024000 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 1015808 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 1007616 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 999424 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 991232 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 497.455413818s of 497.568878174s, submitted: 223
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 991232 heap: 81747968 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 888859 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 17620992 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 135 ms_handle_reset con 0x563ba754d000 session 0x563baa468f00
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 17620992 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 17604608 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 17604608 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fc277000/0x0/0x4ffc00000, data 0x8fb7f9/0x9a3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 17604608 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952823 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 17604608 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 17604608 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 17604608 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 17604608 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 17604608 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954985 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc275000/0x0/0x4ffc00000, data 0x8fd7cb/0x9a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 17604608 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc275000/0x0/0x4ffc00000, data 0x8fd7cb/0x9a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 17604608 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 17604608 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 17604608 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 17604608 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954985 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 17604608 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc275000/0x0/0x4ffc00000, data 0x8fd7cb/0x9a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 17604608 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 7350 writes, 29K keys, 7350 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 7350 writes, 1505 syncs, 4.88 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 500 writes, 874 keys, 500 commit groups, 1.0 writes per commit group, ingest: 0.37 MB, 0.00 MB/s
Interval WAL: 500 writes, 241 syncs, 2.07 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.001       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x563ba5b2f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x563ba5b2f350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc275000/0x0/0x4ffc00000, data 0x8fd7cb/0x9a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 17571840 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 17571840 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 17571840 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954985 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 17571840 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 17571840 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc275000/0x0/0x4ffc00000, data 0x8fd7cb/0x9a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 17563648 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 17563648 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 17563648 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954985 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc275000/0x0/0x4ffc00000, data 0x8fd7cb/0x9a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 17563648 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 17563648 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc275000/0x0/0x4ffc00000, data 0x8fd7cb/0x9a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 17563648 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 17563648 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 17563648 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954985 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 17563648 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc275000/0x0/0x4ffc00000, data 0x8fd7cb/0x9a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 17563648 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 17563648 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 17563648 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 17563648 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954985 data_alloc: 218103808 data_used: 118784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc275000/0x0/0x4ffc00000, data 0x8fd7cb/0x9a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 17563648 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 17563648 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 17563648 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 17555456 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc275000/0x0/0x4ffc00000, data 0x8fd7cb/0x9a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 138 ms_handle_reset con 0x563ba754d400 session 0x563baa2450e0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 17555456 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955137 data_alloc: 218103808 data_used: 122880
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 138 ms_handle_reset con 0x563ba813e800 session 0x563baa662960
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 9527296 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.667663574s of 41.697948456s, submitted: 36
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 9527296 heap: 98533376 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 140 ms_handle_reset con 0x563ba7553000 session 0x563ba97403c0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 90963968 unmapped: 11247616 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 90963968 unmapped: 11247616 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fbdd9000/0x0/0x4ffc00000, data 0xd969f7/0xe41000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fbdd9000/0x0/0x4ffc00000, data 0xd969f7/0xe41000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 90963968 unmapped: 11247616 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019089 data_alloc: 218103808 data_used: 6938624
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 90972160 unmapped: 11239424 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 140 ms_handle_reset con 0x563ba7552800 session 0x563ba92405a0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 90750976 unmapped: 11460608 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 90800128 unmapped: 11411456 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbdd7000/0x0/0x4ffc00000, data 0xd989c9/0xe44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 95051776 unmapped: 7159808 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbdd7000/0x0/0x4ffc00000, data 0xd989c9/0xe44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 95051776 unmapped: 7159808 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052880 data_alloc: 234881024 data_used: 11530240
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 95068160 unmapped: 7143424 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 95068160 unmapped: 7143424 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 95068160 unmapped: 7143424 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbdd7000/0x0/0x4ffc00000, data 0xd989c9/0xe44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 95068160 unmapped: 7143424 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 95068160 unmapped: 7143424 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052880 data_alloc: 234881024 data_used: 11530240
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 95068160 unmapped: 7143424 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 95068160 unmapped: 7143424 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 95068160 unmapped: 7143424 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fbdd7000/0x0/0x4ffc00000, data 0xd989c9/0xe44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.721317291s of 16.755655289s, submitted: 53
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 96526336 unmapped: 5685248 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb895000/0x0/0x4ffc00000, data 0x12db9c9/0x1387000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 96608256 unmapped: 5603328 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095822 data_alloc: 234881024 data_used: 11579392
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 96608256 unmapped: 5603328 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 96608256 unmapped: 5603328 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 96608256 unmapped: 5603328 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb895000/0x0/0x4ffc00000, data 0x12db9c9/0x1387000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 5570560 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 96436224 unmapped: 5775360 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096038 data_alloc: 234881024 data_used: 11579392
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 96141312 unmapped: 6070272 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 96141312 unmapped: 6070272 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 96141312 unmapped: 6070272 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb874000/0x0/0x4ffc00000, data 0x12fc9c9/0x13a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 96141312 unmapped: 6070272 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 96141312 unmapped: 6070272 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096038 data_alloc: 234881024 data_used: 11579392
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb874000/0x0/0x4ffc00000, data 0x12fc9c9/0x13a8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 96141312 unmapped: 6070272 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.332216263s of 13.379437447s, submitted: 52
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 96256000 unmapped: 5955584 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb86b000/0x0/0x4ffc00000, data 0x13059c9/0x13b1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 96264192 unmapped: 5947392 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb86b000/0x0/0x4ffc00000, data 0x13059c9/0x13b1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 96264192 unmapped: 5947392 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 96264192 unmapped: 5947392 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1096310 data_alloc: 234881024 data_used: 11579392
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 96264192 unmapped: 5947392 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7552800 session 0x563baa63e780
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7553000 session 0x563baa63f860
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba813e800 session 0x563baa26d4a0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba813fc00 session 0x563ba9e81a40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba813e400 session 0x563baa1eba40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 97067008 unmapped: 5144576 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7552800 session 0x563baa667a40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7553000 session 0x563ba9c085a0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba813e800 session 0x563baa04a000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba813fc00 session 0x563ba9c09c20
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba9ae9000 session 0x563ba9e56780
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7552800 session 0x563baa26c780
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 4890624 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb74a000/0x0/0x4ffc00000, data 0x14259d9/0x14d2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 4890624 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 4890624 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114703 data_alloc: 234881024 data_used: 12103680
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 4890624 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 4890624 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.316731453s of 11.335366249s, submitted: 19
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7553000 session 0x563ba70fab40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 97329152 unmapped: 4882432 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 97378304 unmapped: 4833280 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb747000/0x0/0x4ffc00000, data 0x14289d9/0x14d5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 3833856 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123848 data_alloc: 234881024 data_used: 13197312
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 98377728 unmapped: 3833856 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 98385920 unmapped: 3825664 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 98385920 unmapped: 3825664 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 98385920 unmapped: 3825664 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 98385920 unmapped: 3825664 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124184 data_alloc: 234881024 data_used: 13197312
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb747000/0x0/0x4ffc00000, data 0x14289d9/0x14d5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 98385920 unmapped: 3825664 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 98385920 unmapped: 3825664 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 98385920 unmapped: 3825664 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 98385920 unmapped: 3825664 heap: 102211584 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.689122200s of 11.700855255s, submitted: 11
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb744000/0x0/0x4ffc00000, data 0x142b9d9/0x14d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 102670336 unmapped: 3735552 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174722 data_alloc: 234881024 data_used: 13201408
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 3424256 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103063552 unmapped: 3342336 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103104512 unmapped: 3301376 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103104512 unmapped: 3301376 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103104512 unmapped: 3301376 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179110 data_alloc: 234881024 data_used: 13201408
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9ea8000/0x0/0x4ffc00000, data 0x1b279d9/0x1bd4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103112704 unmapped: 3293184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103112704 unmapped: 3293184 heap: 106405888 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9ea5000/0x0/0x4ffc00000, data 0x1b2a9d9/0x1bd7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 4153344 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103358464 unmapped: 4096000 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.971302986s of 10.158350945s, submitted: 287
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba813e800 session 0x563ba9d2eb40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba813fc00 session 0x563ba68ccf00
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7551c00 session 0x563ba82d2f00
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 101507072 unmapped: 5947392 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105066 data_alloc: 234881024 data_used: 12103680
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 101507072 unmapped: 5947392 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa2af000/0x0/0x4ffc00000, data 0x13119c9/0x13bd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 101507072 unmapped: 5947392 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa2af000/0x0/0x4ffc00000, data 0x13119c9/0x13bd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 101507072 unmapped: 5947392 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 101507072 unmapped: 5947392 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 101507072 unmapped: 5947392 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105066 data_alloc: 234881024 data_used: 12103680
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 101507072 unmapped: 5947392 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 101507072 unmapped: 5947392 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa2af000/0x0/0x4ffc00000, data 0x13119c9/0x13bd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 101507072 unmapped: 5947392 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563baa245680
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563ba983f680
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7552800 session 0x563ba7688960
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 99467264 unmapped: 7987200 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 99467264 unmapped: 7987200 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998792 data_alloc: 218103808 data_used: 7462912
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 99467264 unmapped: 7987200 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 99467264 unmapped: 7987200 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 99467264 unmapped: 7987200 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbd000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 99467264 unmapped: 7987200 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 99467264 unmapped: 7987200 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998792 data_alloc: 218103808 data_used: 7462912
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 99467264 unmapped: 7987200 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbd000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 99467264 unmapped: 7987200 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbd000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 99467264 unmapped: 7987200 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 99467264 unmapped: 7987200 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbd000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 99467264 unmapped: 7987200 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998792 data_alloc: 218103808 data_used: 7462912
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbd000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 99467264 unmapped: 7987200 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 99467264 unmapped: 7987200 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbd000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 99467264 unmapped: 7987200 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbd000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 99467264 unmapped: 7987200 heap: 107454464 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7553000 session 0x563ba82d2960
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba813e800 session 0x563ba9515860
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563ba97352c0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563ba91dd0e0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.638580322s of 24.683265686s, submitted: 64
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7552800 session 0x563ba9e83a40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7553000 session 0x563ba9e83e00
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba813e800 session 0x563ba92474a0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563ba9e83c20
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563ba9744000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 99663872 unmapped: 13172736 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1036436 data_alloc: 218103808 data_used: 7462912
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 99663872 unmapped: 13172736 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7552800 session 0x563ba97423c0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 99663872 unmapped: 13172736 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa8cd000/0x0/0x4ffc00000, data 0xcf29d9/0xd9f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7553000 session 0x563ba9514b40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba813fc00 session 0x563ba9514000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563ba81f4b40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 99983360 unmapped: 12853248 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa8a8000/0x0/0x4ffc00000, data 0xd169fc/0xdc4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 99983360 unmapped: 12853248 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 12451840 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060378 data_alloc: 234881024 data_used: 10256384
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa8a8000/0x0/0x4ffc00000, data 0xd169fc/0xdc4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 12451840 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 12451840 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 12451840 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa8a8000/0x0/0x4ffc00000, data 0xd169fc/0xdc4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 12451840 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 12451840 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060378 data_alloc: 234881024 data_used: 10256384
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa8a8000/0x0/0x4ffc00000, data 0xd169fc/0xdc4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 12451840 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa8a8000/0x0/0x4ffc00000, data 0xd169fc/0xdc4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 12451840 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 12443648 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.009076118s of 14.035502434s, submitted: 25
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 9641984 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 9191424 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118958 data_alloc: 234881024 data_used: 10727424
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 9191424 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa268000/0x0/0x4ffc00000, data 0x13549fc/0x1402000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa268000/0x0/0x4ffc00000, data 0x13549fc/0x1402000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 9191424 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 9191424 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 9191424 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103645184 unmapped: 9191424 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118958 data_alloc: 234881024 data_used: 10727424
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103899136 unmapped: 8937472 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa24b000/0x0/0x4ffc00000, data 0x13739fc/0x1421000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103899136 unmapped: 8937472 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103899136 unmapped: 8937472 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103899136 unmapped: 8937472 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 8929280 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115598 data_alloc: 234881024 data_used: 10727424
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103907328 unmapped: 8929280 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.867507935s of 12.915572166s, submitted: 61
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 9535488 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa23e000/0x0/0x4ffc00000, data 0x13809fc/0x142e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 9535488 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 9535488 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 9535488 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115902 data_alloc: 234881024 data_used: 10727424
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 9535488 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103301120 unmapped: 9535488 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa23e000/0x0/0x4ffc00000, data 0x13809fc/0x142e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103137280 unmapped: 9699328 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103137280 unmapped: 9699328 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563ba9e82b40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7552800 session 0x563ba91e8960
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103137280 unmapped: 9699328 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116682 data_alloc: 234881024 data_used: 10735616
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7553000 session 0x563ba70fa780
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 11862016 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 11862016 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbc000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 11862016 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 11862016 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 11862016 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008668 data_alloc: 218103808 data_used: 7462912
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbc000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 11862016 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbc000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbc000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 11862016 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 11862016 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbc000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 11862016 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbc000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 11853824 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008668 data_alloc: 218103808 data_used: 7462912
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 11853824 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 11853824 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 11853824 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 11853824 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 11853824 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008668 data_alloc: 218103808 data_used: 7462912
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbc000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 11845632 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 11845632 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbc000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbc000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 11845632 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 11845632 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 11845632 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008668 data_alloc: 218103808 data_used: 7462912
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7551000 session 0x563ba9241680
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563ba91e9c20
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563ba91dc000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7552800 session 0x563ba97434a0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.187543869s of 29.218425751s, submitted: 38
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7553000 session 0x563ba973a960
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7551400 session 0x563ba973a3c0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563ba9743680
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563ba973a5a0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7552800 session 0x563ba92472c0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbc000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 12148736 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 12148736 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 12148736 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 12148736 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 12148736 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048416 data_alloc: 218103808 data_used: 6938624
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa81f000/0x0/0x4ffc00000, data 0xda09d9/0xe4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 12148736 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 12148736 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa81f000/0x0/0x4ffc00000, data 0xda09d9/0xe4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 12148736 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 12148736 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7553000 session 0x563ba968ed20
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa81f000/0x0/0x4ffc00000, data 0xda09d9/0xe4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100687872 unmapped: 12148736 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048416 data_alloc: 218103808 data_used: 6938624
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563baa3d0000 session 0x563ba9748960
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa81f000/0x0/0x4ffc00000, data 0xda09d9/0xe4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563ba9240d20
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.343547821s of 10.354428291s, submitted: 7
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563ba9243680
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0xda09e9/0xe4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 12132352 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 12132352 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 9871360 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 102965248 unmapped: 9871360 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0xda09e9/0xe4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 102998016 unmapped: 9838592 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082454 data_alloc: 234881024 data_used: 11239424
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 102998016 unmapped: 9838592 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 102998016 unmapped: 9838592 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 102998016 unmapped: 9838592 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0xda09e9/0xe4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0xda09e9/0xe4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103030784 unmapped: 9805824 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103030784 unmapped: 9805824 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082454 data_alloc: 234881024 data_used: 11239424
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103030784 unmapped: 9805824 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.586965561s of 10.588331223s, submitted: 1
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 104701952 unmapped: 8134656 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 105218048 unmapped: 7618560 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 105218048 unmapped: 7618560 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa05b000/0x0/0x4ffc00000, data 0x15639e9/0x1611000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 105218048 unmapped: 7618560 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152796 data_alloc: 234881024 data_used: 11988992
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 105218048 unmapped: 7618560 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 105218048 unmapped: 7618560 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103407616 unmapped: 9428992 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa05b000/0x0/0x4ffc00000, data 0x15639e9/0x1611000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103448576 unmapped: 9388032 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103415808 unmapped: 9420800 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150092 data_alloc: 234881024 data_used: 12050432
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103415808 unmapped: 9420800 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103415808 unmapped: 9420800 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103415808 unmapped: 9420800 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa058000/0x0/0x4ffc00000, data 0x15669e9/0x1614000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103415808 unmapped: 9420800 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.929491043s of 12.976872444s, submitted: 68
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103415808 unmapped: 9420800 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150316 data_alloc: 234881024 data_used: 12050432
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa057000/0x0/0x4ffc00000, data 0x15679e9/0x1615000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa057000/0x0/0x4ffc00000, data 0x15679e9/0x1615000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103415808 unmapped: 9420800 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa057000/0x0/0x4ffc00000, data 0x15679e9/0x1615000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103448576 unmapped: 9388032 heap: 112836608 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563baa3d0000 session 0x563ba90fab40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103751680 unmapped: 17481728 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103751680 unmapped: 17481728 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103751680 unmapped: 17481728 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182026 data_alloc: 234881024 data_used: 12050432
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103751680 unmapped: 17481728 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9c33000/0x0/0x4ffc00000, data 0x198b9e9/0x1a39000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103800832 unmapped: 17432576 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9c33000/0x0/0x4ffc00000, data 0x198b9e9/0x1a39000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103800832 unmapped: 17432576 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103800832 unmapped: 17432576 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 103800832 unmapped: 17432576 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182026 data_alloc: 234881024 data_used: 12050432
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 104095744 unmapped: 17137664 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107962368 unmapped: 13271040 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9c33000/0x0/0x4ffc00000, data 0x198b9e9/0x1a39000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107962368 unmapped: 13271040 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9c33000/0x0/0x4ffc00000, data 0x198b9e9/0x1a39000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9c33000/0x0/0x4ffc00000, data 0x198b9e9/0x1a39000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 13205504 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9c33000/0x0/0x4ffc00000, data 0x198b9e9/0x1a39000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 13205504 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210602 data_alloc: 234881024 data_used: 16277504
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 13205504 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 13205504 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 13205504 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.442790985s of 19.454063416s, submitted: 5
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108101632 unmapped: 13131776 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108101632 unmapped: 13131776 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210874 data_alloc: 234881024 data_used: 16277504
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9c30000/0x0/0x4ffc00000, data 0x198c9e9/0x1a3a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 112287744 unmapped: 8945664 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 7831552 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 7831552 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 7831552 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 7831552 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296594 data_alloc: 234881024 data_used: 16523264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 7831552 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f922c000/0x0/0x4ffc00000, data 0x23849e9/0x2432000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 8003584 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 8003584 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563baa3d0400 session 0x563ba968f860
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.969150543s of 10.044746399s, submitted: 104
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563baa3d0800 session 0x563ba9d25e00
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9237000/0x0/0x4ffc00000, data 0x23879e9/0x2435000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110297088 unmapped: 10936320 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa056000/0x0/0x4ffc00000, data 0x15689e9/0x1616000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110297088 unmapped: 10936320 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158708 data_alloc: 234881024 data_used: 12050432
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110297088 unmapped: 10936320 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7552800 session 0x563ba9748b40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7553000 session 0x563ba968f680
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563ba70fa5a0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 16252928 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 16252928 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 16252928 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 16252928 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026875 data_alloc: 218103808 data_used: 6414336
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbd000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 16252928 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 16252928 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 16252928 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 16252928 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 16252928 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026875 data_alloc: 218103808 data_used: 6414336
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbd000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 16252928 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 16252928 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 16252928 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbd000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbd000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 16252928 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 16252928 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026875 data_alloc: 218103808 data_used: 6414336
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbd000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 16252928 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 16252928 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 16252928 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17304 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbd000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 16252928 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563ba983e780
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563baa3d0000 session 0x563ba91dc960
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 106119168 unmapped: 15114240 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026267 data_alloc: 218103808 data_used: 6938624
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.710817337s of 21.724245071s, submitted: 23
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563ba68cd0e0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563ba9735e00
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 106799104 unmapped: 14434304 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 106799104 unmapped: 14434304 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4faaea000/0x0/0x4ffc00000, data 0xad5a2b/0xb82000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 106799104 unmapped: 14434304 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4faaea000/0x0/0x4ffc00000, data 0xad5a2b/0xb82000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 106799104 unmapped: 14434304 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 106799104 unmapped: 14434304 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1044534 data_alloc: 218103808 data_used: 6938624
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 106799104 unmapped: 14434304 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7552800 session 0x563ba97352c0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 106323968 unmapped: 14909440 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 106291200 unmapped: 14942208 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 106291200 unmapped: 14942208 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4faac6000/0x0/0x4ffc00000, data 0xaf9a2b/0xba6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 15007744 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058122 data_alloc: 218103808 data_used: 8519680
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4faac6000/0x0/0x4ffc00000, data 0xaf9a2b/0xba6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 15007744 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 15007744 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4faac6000/0x0/0x4ffc00000, data 0xaf9a2b/0xba6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 15007744 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 15007744 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 15007744 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058122 data_alloc: 218103808 data_used: 8519680
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 15007744 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 15007744 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.783998489s of 17.812362671s, submitted: 26
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 11886592 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa8ff000/0x0/0x4ffc00000, data 0xcc0a2b/0xd6d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109584384 unmapped: 11649024 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109584384 unmapped: 11649024 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117678 data_alloc: 218103808 data_used: 9003008
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109584384 unmapped: 11649024 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109584384 unmapped: 11649024 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa532000/0x0/0x4ffc00000, data 0x108da2b/0x113a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109584384 unmapped: 11649024 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109584384 unmapped: 11649024 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109199360 unmapped: 12034048 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113646 data_alloc: 218103808 data_used: 9007104
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109199360 unmapped: 12034048 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109199360 unmapped: 12034048 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa50e000/0x0/0x4ffc00000, data 0x10b1a2b/0x115e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109199360 unmapped: 12034048 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109199360 unmapped: 12034048 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109199360 unmapped: 12034048 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113646 data_alloc: 218103808 data_used: 9007104
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109199360 unmapped: 12034048 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.744565010s of 13.812591553s, submitted: 84
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109207552 unmapped: 12025856 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109207552 unmapped: 12025856 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa508000/0x0/0x4ffc00000, data 0x10b7a2b/0x1164000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109207552 unmapped: 12025856 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109207552 unmapped: 12025856 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113862 data_alloc: 218103808 data_used: 9015296
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109207552 unmapped: 12025856 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109207552 unmapped: 12025856 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109314048 unmapped: 11919360 heap: 121233408 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa4f8000/0x0/0x4ffc00000, data 0x10c7a2b/0x1174000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563baa3d1400 session 0x563ba97332c0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563baa3d1800 session 0x563ba97341e0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563ba9744b40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563ba91e8000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7552800 session 0x563ba9e574a0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109510656 unmapped: 23273472 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109510656 unmapped: 23273472 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190127 data_alloc: 218103808 data_used: 9015296
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109510656 unmapped: 23273472 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9a2e000/0x0/0x4ffc00000, data 0x1b91a2b/0x1c3e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109510656 unmapped: 23273472 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109510656 unmapped: 23273472 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.894608498s of 11.923360825s, submitted: 20
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109510656 unmapped: 23273472 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 15556608 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1266107 data_alloc: 234881024 data_used: 20021248
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 15548416 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 15548416 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9a2e000/0x0/0x4ffc00000, data 0x1b91a2b/0x1c3e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 15523840 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 15523840 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9a2e000/0x0/0x4ffc00000, data 0x1b91a2b/0x1c3e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 15491072 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265939 data_alloc: 234881024 data_used: 20021248
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 117342208 unmapped: 15441920 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 117342208 unmapped: 15441920 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9a27000/0x0/0x4ffc00000, data 0x1b98a2b/0x1c45000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 117342208 unmapped: 15441920 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.329229355s of 10.336941719s, submitted: 8
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f95d0000/0x0/0x4ffc00000, data 0x1fefa2b/0x209c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 119619584 unmapped: 13164544 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f95d0000/0x0/0x4ffc00000, data 0x1fefa2b/0x209c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118767616 unmapped: 14016512 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307003 data_alloc: 234881024 data_used: 20086784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118775808 unmapped: 14008320 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118775808 unmapped: 14008320 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118775808 unmapped: 14008320 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118775808 unmapped: 14008320 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9564000/0x0/0x4ffc00000, data 0x205ba2b/0x2108000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118784000 unmapped: 14000128 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305007 data_alloc: 234881024 data_used: 20086784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9564000/0x0/0x4ffc00000, data 0x205ba2b/0x2108000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118784000 unmapped: 14000128 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9564000/0x0/0x4ffc00000, data 0x205ba2b/0x2108000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 13991936 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 13991936 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 13991936 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.994010925s of 11.025979996s, submitted: 33
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118792192 unmapped: 13991936 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305123 data_alloc: 234881024 data_used: 20086784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118800384 unmapped: 13983744 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118800384 unmapped: 13983744 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f955f000/0x0/0x4ffc00000, data 0x205fa2b/0x210c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 13918208 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 13918208 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f955b000/0x0/0x4ffc00000, data 0x2064a2b/0x2111000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f955b000/0x0/0x4ffc00000, data 0x2064a2b/0x2111000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 13910016 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305395 data_alloc: 234881024 data_used: 20086784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 13910016 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 13901824 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 13901824 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118898688 unmapped: 13885440 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118898688 unmapped: 13885440 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305443 data_alloc: 234881024 data_used: 20086784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9555000/0x0/0x4ffc00000, data 0x206aa2b/0x2117000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118898688 unmapped: 13885440 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118898688 unmapped: 13885440 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118898688 unmapped: 13885440 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.847264290s of 13.856316566s, submitted: 8
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118939648 unmapped: 13844480 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118947840 unmapped: 13836288 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305567 data_alloc: 234881024 data_used: 20086784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118947840 unmapped: 13836288 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9552000/0x0/0x4ffc00000, data 0x206da2b/0x211a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118947840 unmapped: 13836288 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9552000/0x0/0x4ffc00000, data 0x206da2b/0x211a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118947840 unmapped: 13836288 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118947840 unmapped: 13836288 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118947840 unmapped: 13836288 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305643 data_alloc: 234881024 data_used: 20086784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118947840 unmapped: 13836288 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118947840 unmapped: 13836288 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 13819904 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f954f000/0x0/0x4ffc00000, data 0x2070a2b/0x211d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 13819904 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 13819904 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305643 data_alloc: 234881024 data_used: 20086784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.673605919s of 11.678688049s, submitted: 4
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 13803520 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 13803520 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 13803520 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9549000/0x0/0x4ffc00000, data 0x2076a2b/0x2123000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 13803520 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 118988800 unmapped: 13795328 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305923 data_alloc: 234881024 data_used: 20086784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 13770752 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 13770752 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 13762560 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9543000/0x0/0x4ffc00000, data 0x2079a2b/0x2126000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 13762560 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9543000/0x0/0x4ffc00000, data 0x2079a2b/0x2126000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9543000/0x0/0x4ffc00000, data 0x2079a2b/0x2126000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 13762560 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305947 data_alloc: 234881024 data_used: 20086784
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 13762560 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9543000/0x0/0x4ffc00000, data 0x2079a2b/0x2126000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.460902214s of 11.469537735s, submitted: 6
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 13762560 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563baa3d1400 session 0x563ba91e8b40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563baa3d1c00 session 0x563ba9732960
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 112967680 unmapped: 19816448 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 112967680 unmapped: 19816448 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 112967680 unmapped: 19816448 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123807 data_alloc: 218103808 data_used: 9011200
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 112967680 unmapped: 19816448 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa4cd000/0x0/0x4ffc00000, data 0x10f2a2b/0x119f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 112967680 unmapped: 19816448 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563baa3d0400 session 0x563baa6132c0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7553000 session 0x563ba9735680
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 112967680 unmapped: 19816448 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563ba9dd92c0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa4c7000/0x0/0x4ffc00000, data 0x10f8a2b/0x11a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111403008 unmapped: 21381120 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111403008 unmapped: 21381120 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1044661 data_alloc: 218103808 data_used: 6803456
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111403008 unmapped: 21381120 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111403008 unmapped: 21381120 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111403008 unmapped: 21381120 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111403008 unmapped: 21381120 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111403008 unmapped: 21381120 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1044661 data_alloc: 218103808 data_used: 6803456
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111403008 unmapped: 21381120 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111403008 unmapped: 21381120 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111403008 unmapped: 21381120 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111403008 unmapped: 21381120 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111403008 unmapped: 21381120 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1044661 data_alloc: 218103808 data_used: 6803456
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111403008 unmapped: 21381120 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa830000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111403008 unmapped: 21381120 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111403008 unmapped: 21381120 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111403008 unmapped: 21381120 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563ba91dc000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7552800 session 0x563ba9e80960
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563ba91dcb40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563ba9f454a0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.976291656s of 23.016195297s, submitted: 50
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7553000 session 0x563ba97483c0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563baa3d0400 session 0x563ba81cc5a0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563baa3d1400 session 0x563ba9dd94a0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563ba9e56b40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563ba973dc20
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 21209088 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077947 data_alloc: 218103808 data_used: 6803456
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa86f000/0x0/0x4ffc00000, data 0xd509d9/0xdfd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 21209088 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 21209088 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7553000 session 0x563ba6ac03c0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111886336 unmapped: 20897792 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111894528 unmapped: 20889600 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa84b000/0x0/0x4ffc00000, data 0xd749d9/0xe21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111894528 unmapped: 20889600 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083943 data_alloc: 218103808 data_used: 7372800
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111894528 unmapped: 20889600 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111894528 unmapped: 20889600 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa84b000/0x0/0x4ffc00000, data 0xd749d9/0xe21000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563baa3d0400 session 0x563baa40b0e0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563bab772000 session 0x563ba97374a0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111894528 unmapped: 20889600 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563ba968ed20
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109191168 unmapped: 23592960 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109191168 unmapped: 23592960 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046883 data_alloc: 218103808 data_used: 6803456
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbd000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109191168 unmapped: 23592960 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109191168 unmapped: 23592960 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbd000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109191168 unmapped: 23592960 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109191168 unmapped: 23592960 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbd000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27140 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109191168 unmapped: 23592960 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046883 data_alloc: 218103808 data_used: 6803456
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109191168 unmapped: 23592960 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109191168 unmapped: 23592960 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109191168 unmapped: 23592960 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbd000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbd000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563ba68cc3c0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7553000 session 0x563ba95152c0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563baa3d0400 session 0x563ba97450e0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563bab772400 session 0x563baa023a40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109191168 unmapped: 23592960 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.422554016s of 19.451459885s, submitted: 14
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563baa0232c0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563ba974b680
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109535232 unmapped: 23248896 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108688 data_alloc: 218103808 data_used: 6803456
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109535232 unmapped: 23248896 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109535232 unmapped: 23248896 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa4c0000/0x0/0x4ffc00000, data 0x10ffa2b/0x11ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7553000 session 0x563ba974af00
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109551616 unmapped: 23232512 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 109559808 unmapped: 23224320 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111534080 unmapped: 21250048 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157349 data_alloc: 234881024 data_used: 13959168
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563baa3d0400 session 0x563ba974a5a0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563bab772800 session 0x563ba973d2c0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111542272 unmapped: 21241856 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563bab772800 session 0x563ba9748d20
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 24125440 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fac9e000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fac9e000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 24125440 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 24125440 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 24125440 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053165 data_alloc: 218103808 data_used: 6803456
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 24125440 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 24125440 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 24125440 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fac9e000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 24125440 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 24125440 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053165 data_alloc: 218103808 data_used: 6803456
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 24125440 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 24125440 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 24125440 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 24125440 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fac9e000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 24125440 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053165 data_alloc: 218103808 data_used: 6803456
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 24125440 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 24125440 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 24125440 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563ba9247680
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563ba973c000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7553000 session 0x563ba9246d20
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563baa3d0400 session 0x563ba9247a40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.288846970s of 24.343027115s, submitted: 66
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563ba9247860
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563ba9732f00
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7553000 session 0x563baa0234a0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563bab772800 session 0x563ba968f0e0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563bab772c00 session 0x563ba97361e0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 24739840 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 24739840 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079116 data_alloc: 218103808 data_used: 6803456
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa9b6000/0x0/0x4ffc00000, data 0xc099d9/0xcb6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563ba973c960
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 24739840 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563ba91e8b40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7553000 session 0x563ba91e9c20
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563bab772800 session 0x563ba91dde00
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 24395776 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa9b6000/0x0/0x4ffc00000, data 0xc099d9/0xcb6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108396544 unmapped: 24387584 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa990000/0x0/0x4ffc00000, data 0xc2da0c/0xcdc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa990000/0x0/0x4ffc00000, data 0xc2da0c/0xcdc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 25509888 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 25509888 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108228 data_alloc: 218103808 data_used: 6971392
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa990000/0x0/0x4ffc00000, data 0xc2da0c/0xcdc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 25509888 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 25509888 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 25509888 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 25509888 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa990000/0x0/0x4ffc00000, data 0xc2da0c/0xcdc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 25509888 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108228 data_alloc: 218103808 data_used: 6971392
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa990000/0x0/0x4ffc00000, data 0xc2da0c/0xcdc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 25509888 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 25509888 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa990000/0x0/0x4ffc00000, data 0xc2da0c/0xcdc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.552720070s of 14.578207970s, submitted: 27
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 24944640 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108134400 unmapped: 24649728 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa847000/0x0/0x4ffc00000, data 0xd70a0c/0xe1f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108134400 unmapped: 24649728 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128366 data_alloc: 218103808 data_used: 7102464
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108134400 unmapped: 24649728 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa833000/0x0/0x4ffc00000, data 0xd7ca0c/0xe2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108134400 unmapped: 24649728 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108134400 unmapped: 24649728 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108134400 unmapped: 24649728 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa833000/0x0/0x4ffc00000, data 0xd7ca0c/0xe2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 24616960 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128366 data_alloc: 218103808 data_used: 7102464
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 24616960 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 24608768 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 24608768 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa833000/0x0/0x4ffc00000, data 0xd7ca0c/0xe2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 24608768 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa833000/0x0/0x4ffc00000, data 0xd7ca0c/0xe2b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.740224838s of 11.761293411s, submitted: 27
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563bab773000 session 0x563ba9dd94a0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563bab773400 session 0x563ba91dcb40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563ba9c09a40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 25624576 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061655 data_alloc: 218103808 data_used: 3723264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 25624576 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 25624576 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 25624576 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 25624576 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbc000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 25624576 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061655 data_alloc: 218103808 data_used: 3723264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 25624576 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 25624576 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 25624576 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 25624576 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 25624576 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061655 data_alloc: 218103808 data_used: 3723264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbc000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 25624576 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4facbc000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 25624576 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 25616384 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 25616384 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 25616384 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061655 data_alloc: 218103808 data_used: 3723264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563ba9735680
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7553000 session 0x563ba9734960
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563bab772800 session 0x563ba973dc20
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 25616384 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563ba973c1e0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.614711761s of 16.654493332s, submitted: 48
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563ba9745c20
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fab2d000/0x0/0x4ffc00000, data 0xa939c9/0xb3f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 25600000 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 25600000 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fab2d000/0x0/0x4ffc00000, data 0xa939c9/0xb3f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 25600000 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 25600000 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075013 data_alloc: 218103808 data_used: 3723264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7553000 session 0x563ba9dd9680
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 25288704 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107495424 unmapped: 25288704 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fab08000/0x0/0x4ffc00000, data 0xab79ec/0xb64000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 25190400 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 25190400 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 25190400 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090726 data_alloc: 218103808 data_used: 5365760
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 25190400 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 25190400 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fab08000/0x0/0x4ffc00000, data 0xab79ec/0xb64000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 25190400 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.768367767s of 12.777514458s, submitted: 8
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563bab773800 session 0x563ba70fbc20
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108748800 unmapped: 24035328 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563bab773c00 session 0x563ba9dd8d20
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108765184 unmapped: 24018944 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134549 data_alloc: 218103808 data_used: 5369856
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa5a9000/0x0/0x4ffc00000, data 0x1015a4e/0x10c3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 108847104 unmapped: 23937024 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 21913600 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 112820224 unmapped: 19963904 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa161000/0x0/0x4ffc00000, data 0x143ea4e/0x14ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa161000/0x0/0x4ffc00000, data 0x143ea4e/0x14ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 112820224 unmapped: 19963904 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa161000/0x0/0x4ffc00000, data 0x143ea4e/0x14ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 114991104 unmapped: 17793024 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216779 data_alloc: 234881024 data_used: 10375168
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa161000/0x0/0x4ffc00000, data 0x143ea4e/0x14ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 114991104 unmapped: 17793024 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa161000/0x0/0x4ffc00000, data 0x143ea4e/0x14ec000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 114991104 unmapped: 17793024 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 114991104 unmapped: 17793024 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 114008064 unmapped: 18776064 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa17d000/0x0/0x4ffc00000, data 0x1441a4e/0x14ef000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 114008064 unmapped: 18776064 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209355 data_alloc: 234881024 data_used: 10375168
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 114008064 unmapped: 18776064 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 114008064 unmapped: 18776064 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 114008064 unmapped: 18776064 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.516884804s of 14.588496208s, submitted: 101
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 16728064 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 16293888 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274169 data_alloc: 234881024 data_used: 11034624
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f99d8000/0x0/0x4ffc00000, data 0x1be5a4e/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 16211968 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 116580352 unmapped: 16203776 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 116580352 unmapped: 16203776 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f99d8000/0x0/0x4ffc00000, data 0x1be5a4e/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 116580352 unmapped: 16203776 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 116580352 unmapped: 16203776 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1274169 data_alloc: 234881024 data_used: 11034624
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 16973824 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f99d8000/0x0/0x4ffc00000, data 0x1be5a4e/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 16973824 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f99d7000/0x0/0x4ffc00000, data 0x1be7a4e/0x1c95000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 16973824 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 16973824 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 16973824 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271425 data_alloc: 234881024 data_used: 11038720
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f99d7000/0x0/0x4ffc00000, data 0x1be7a4e/0x1c95000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 16973824 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.748860359s of 12.809944153s, submitted: 73
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 16973824 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 16973824 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 115810304 unmapped: 16973824 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563ba9514b40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563ba91e8780
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 19152896 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142077 data_alloc: 218103808 data_used: 5873664
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa311000/0x0/0x4ffc00000, data 0xee39ec/0xf90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 19152896 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 19152896 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa311000/0x0/0x4ffc00000, data 0xee39ec/0xf90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 113631232 unmapped: 19152896 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563bab773400 session 0x563ba92405a0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563bab772800 session 0x563ba68ccf00
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba7553000 session 0x563baa023e00
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 22167552 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 22167552 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077201 data_alloc: 218103808 data_used: 3723264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa677000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 22167552 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa677000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 22167552 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 22167552 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 22167552 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 22167552 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077201 data_alloc: 218103808 data_used: 3723264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 22167552 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa677000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 22167552 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 22167552 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 22167552 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 22167552 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077201 data_alloc: 218103808 data_used: 3723264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 22167552 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 22167552 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa677000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 22167552 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 22167552 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa677000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 22167552 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077201 data_alloc: 218103808 data_used: 3723264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563ba9732000
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563ba973da40
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563bab772800 session 0x563ba97490e0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 22167552 heap: 132784128 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563bab773400 session 0x563ba974a1e0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.940811157s of 24.968006134s, submitted: 39
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563bab773800 session 0x563ba973c780
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563ba97341e0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d400 session 0x563ba9dd8780
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563bab772800 session 0x563ba92470e0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563bab773400 session 0x563baa244d20
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa017000/0x0/0x4ffc00000, data 0x1197a3b/0x1245000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110698496 unmapped: 26288128 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa017000/0x0/0x4ffc00000, data 0x1197a3b/0x1245000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110698496 unmapped: 26288128 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110698496 unmapped: 26288128 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 110698496 unmapped: 26288128 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143196 data_alloc: 218103808 data_used: 3723264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 113434624 unmapped: 23552000 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 113434624 unmapped: 23552000 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa017000/0x0/0x4ffc00000, data 0x1197a3b/0x1245000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 23519232 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa017000/0x0/0x4ffc00000, data 0x1197a3b/0x1245000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 113500160 unmapped: 23486464 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 113500160 unmapped: 23486464 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196244 data_alloc: 234881024 data_used: 11587584
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa017000/0x0/0x4ffc00000, data 0x1197a3b/0x1245000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 113532928 unmapped: 23453696 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 113532928 unmapped: 23453696 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 113532928 unmapped: 23453696 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.853825569s of 12.886027336s, submitted: 31
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 20283392 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f99a5000/0x0/0x4ffc00000, data 0x1809a3b/0x18b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [1])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20619264 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258540 data_alloc: 234881024 data_used: 12623872
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20619264 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9996000/0x0/0x4ffc00000, data 0x1817a3b/0x18c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20619264 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20619264 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9996000/0x0/0x4ffc00000, data 0x1817a3b/0x18c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20619264 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27146 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20619264 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258540 data_alloc: 234881024 data_used: 12623872
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20619264 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20619264 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20619264 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20619264 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f9996000/0x0/0x4ffc00000, data 0x1817a3b/0x18c5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 20619264 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258540 data_alloc: 234881024 data_used: 12623872
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.192687988s of 11.236999512s, submitted: 61
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563bab773c00 session 0x563ba97450e0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 ms_handle_reset con 0x563ba754d000 session 0x563baa6121e0
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111812608 unmapped: 25174016 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa566000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111812608 unmapped: 25174016 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111812608 unmapped: 25174016 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa566000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111812608 unmapped: 25174016 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111812608 unmapped: 25174016 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088264 data_alloc: 218103808 data_used: 3723264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111812608 unmapped: 25174016 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111812608 unmapped: 25174016 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa566000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111812608 unmapped: 25174016 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa566000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111812608 unmapped: 25174016 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111812608 unmapped: 25174016 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088264 data_alloc: 218103808 data_used: 3723264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111812608 unmapped: 25174016 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111820800 unmapped: 25165824 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa566000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111820800 unmapped: 25165824 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111820800 unmapped: 25165824 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111820800 unmapped: 25165824 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088264 data_alloc: 218103808 data_used: 3723264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111820800 unmapped: 25165824 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa566000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111820800 unmapped: 25165824 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa566000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111820800 unmapped: 25165824 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 25157632 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111828992 unmapped: 25157632 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088264 data_alloc: 218103808 data_used: 3723264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 25149440 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa566000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 25149440 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 25149440 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 25149440 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 25149440 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088264 data_alloc: 218103808 data_used: 3723264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 25149440 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 25141248 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa566000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 25141248 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 25141248 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 25141248 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088264 data_alloc: 218103808 data_used: 3723264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 25141248 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 25141248 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa566000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 25141248 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 25141248 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111853568 unmapped: 25133056 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088264 data_alloc: 218103808 data_used: 3723264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 25124864 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 25124864 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa566000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 25124864 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 25124864 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa566000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 25124864 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088264 data_alloc: 218103808 data_used: 3723264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 25124864 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 25124864 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 25124864 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa566000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111869952 unmapped: 25116672 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111869952 unmapped: 25116672 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088264 data_alloc: 218103808 data_used: 3723264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.1 total, 600.0 interval
Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
Cumulative WAL: 10K writes, 2853 syncs, 3.60 writes per sync, written: 0.03 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2923 writes, 9704 keys, 2923 commit groups, 1.0 writes per commit group, ingest: 10.38 MB, 0.02 MB/s
Interval WAL: 2923 writes, 1348 syncs, 2.17 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111869952 unmapped: 25116672 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111869952 unmapped: 25116672 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111869952 unmapped: 25116672 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111869952 unmapped: 25116672 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa566000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111869952 unmapped: 25116672 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088264 data_alloc: 218103808 data_used: 3723264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111869952 unmapped: 25116672 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111869952 unmapped: 25116672 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111869952 unmapped: 25116672 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111869952 unmapped: 25116672 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa566000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111869952 unmapped: 25116672 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088264 data_alloc: 218103808 data_used: 3723264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111869952 unmapped: 25116672 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111878144 unmapped: 25108480 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa566000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111886336 unmapped: 25100288 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111976448 unmapped: 25010176 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: do_command 'config diff' '{prefix=config diff}'
Oct  9 10:05:07 compute-0 ceph-osd[12528]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  9 10:05:07 compute-0 ceph-osd[12528]: do_command 'config show' '{prefix=config show}'
Oct  9 10:05:07 compute-0 ceph-osd[12528]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  9 10:05:07 compute-0 ceph-osd[12528]: do_command 'counter dump' '{prefix=counter dump}'
Oct  9 10:05:07 compute-0 ceph-osd[12528]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  9 10:05:07 compute-0 ceph-osd[12528]: do_command 'counter schema' '{prefix=counter schema}'
Oct  9 10:05:07 compute-0 ceph-osd[12528]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:07 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 24821760 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088264 data_alloc: 218103808 data_used: 3723264
Oct  9 10:05:07 compute-0 ceph-osd[12528]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fa566000/0x0/0x4ffc00000, data 0x9039c9/0x9af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Oct  9 10:05:07 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 25075712 heap: 136986624 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:07 compute-0 ceph-osd[12528]: do_command 'log dump' '{prefix=log dump}'
Oct  9 10:05:07 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26908 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:07 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27161 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:07 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27173 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:07 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct  9 10:05:07 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2236945637' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  9 10:05:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:05:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:07.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:05:08 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26941 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:08 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27194 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:08 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26956 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:08.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:08 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.26971 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:08 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27221 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:08 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17385 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:08 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17409 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:08.933Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:08.937Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:08.943Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:08.945Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:09 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17439 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:09 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v964: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:05:09 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17460 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:09 compute-0 nova_compute[187439]: 2025-10-09 10:05:09.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:05:09 compute-0 podman[206881]: 2025-10-09 10:05:09.669996854 +0000 UTC m=+0.108631562 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.license=GPLv2)
Oct  9 10:05:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Oct  9 10:05:09 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1467015024' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct  9 10:05:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Oct  9 10:05:09 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/991276937' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct  9 10:05:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:05:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:09.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
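Annotation: each radosgw request above produces a beast access line with client, verb, status and latency in a fixed layout. A small parser sketch, with the regex written against the examples in this log rather than any documented radosgw format:

    import re

    BEAST = re.compile(
        r"beast: \S+: (?P<ip>\S+) - (?P<user>\S+) "
        r"\[(?P<ts>[^\]]+)\] \"(?P<req>[^\"]+)\" "
        r"(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s"
    )

    line = ('beast: 0x7f7346e135d0: 192.168.122.102 - anonymous '
            '[09/Oct/2025:10:05:09.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000009s')

    m = BEAST.search(line)
    if m:
        print(m.group("ip"), m.group("req"), m.group("status"),
              float(m.group("latency")))  # 192.168.122.102 HEAD / HTTP/1.0 200 0.001000009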
Oct  9 10:05:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Oct  9 10:05:09 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2695539313' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct  9 10:05:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Oct  9 10:05:09 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1217789133' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct  9 10:05:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Oct  9 10:05:10 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2624478517' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct  9 10:05:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:05:10.118 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 10:05:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:05:10.119 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 10:05:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:05:10.119 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 10:05:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:10.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Oct  9 10:05:10 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4032395893' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct  9 10:05:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Oct  9 10:05:10 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/212958408' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct  9 10:05:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Oct  9 10:05:10 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1464335686' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct  9 10:05:10 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27169 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Oct  9 10:05:10 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/15960154' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct  9 10:05:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Oct  9 10:05:10 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1550517676' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct  9 10:05:10 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27178 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Oct  9 10:05:10 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2451567545' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct  9 10:05:10 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27199 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Oct  9 10:05:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3056944751' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct  9 10:05:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Oct  9 10:05:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1744172385' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct  9 10:05:11 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27440 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:11 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27428 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:11 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27446 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:11 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v965: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:05:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct  9 10:05:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1183017444' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  9 10:05:11 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27244 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
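Annotation: the _set_new_cache_sizes line above is the mon splitting its cache budget between the incremental osdmap cache, the full osdmap cache, and the RocksDB block cache. The logged values check out arithmetically; a quick sanity sketch using only the numbers above (this reproduces the logged split, not the mon's actual tuning algorithm):

    cache_size = 1020054731  # total cache budget from the log, bytes
    inc_alloc  = 348127232   # incremental osdmap cache
    full_alloc = 348127232   # full osdmap cache
    kv_alloc   = 318767104   # RocksDB block cache

    total = inc_alloc + full_alloc + kv_alloc
    print(f"allocated {total} of {cache_size} bytes "
          f"({total / cache_size:.1%}), slack {cache_size - total}")
    # -> allocated 1015021568 of 1020054731 bytes (99.5%), slack 5033163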
Oct  9 10:05:11 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27470 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Oct  9 10:05:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1760469970' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct  9 10:05:11 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27265 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:11 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27283 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:11 compute-0 nova_compute[187439]: 2025-10-09 10:05:11.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:05:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:11.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:11 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
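Annotation: ganesha logs rados_cluster_grace_enforcing: ret=-45 on every grace cycle. If the value follows the usual negative-errno convention, the standard library can at least name it (what ganesha intends by it is not something this log alone settles):

    import errno
    import os

    ret = -45  # from the ganesha log line above
    code = -ret
    print(errno.errorcode.get(code, "?"), os.strerror(code))
    # On Linux this prints: EL2NSYNC Level 2 not synchronized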
Oct  9 10:05:12 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27298 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:12 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17691 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:12 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17682 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:05:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:12.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:05:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:05:12] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Oct  9 10:05:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:05:12] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
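Annotation: the GET /metrics pair above is Prometheus scraping the mgr's prometheus module. The same ~48 KiB exposition can be fetched by hand; a sketch assuming the module listens on its default port 9283 at the address shown in the scrape log (the log itself does not record the port):

    import urllib.request

    # Default ceph-mgr prometheus module port is 9283; host from the scrape log.
    with urllib.request.urlopen("http://192.168.122.100:9283/metrics", timeout=5) as resp:
        body = resp.read().decode()

    # Print the health-related samples from the exposition.
    for line in body.splitlines():
        if line.startswith("ceph_health"):
            print(line)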
Oct  9 10:05:12 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Oct  9 10:05:12 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/225349446' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct  9 10:05:12 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17706 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:12 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27548 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:12 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17718 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:12 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17724 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:12 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27572 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:12 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27361 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:12 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  9 10:05:12 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  9 10:05:12 compute-0 systemd[1]: Starting Hostname Service...
Oct  9 10:05:13 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17748 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:13 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  9 10:05:13 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  9 10:05:13 compute-0 systemd[1]: Started Hostname Service.
Oct  9 10:05:13 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27605 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:13 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27391 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:13 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v966: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:05:13 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Oct  9 10:05:13 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/557993267' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct  9 10:05:13 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17808 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:13.572Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:13.585Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:13.585Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:13.586Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:13 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17829 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:13 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27457 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:13 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Oct  9 10:05:13 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3059809898' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct  9 10:05:13 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27689 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:13.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:14 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17847 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Oct  9 10:05:14 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3101704029' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  9 10:05:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:14.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:14 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17865 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:14 compute-0 nova_compute[187439]: 2025-10-09 10:05:14.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:05:14 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Oct  9 10:05:14 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2953610832' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct  9 10:05:14 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  9 10:05:14 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  9 10:05:14 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17883 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:15 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Oct  9 10:05:15 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2696534015' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct  9 10:05:15 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v967: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:05:15 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Oct  9 10:05:15 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2053053048' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct  9 10:05:15 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27550 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:15 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27788 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:15 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17928 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:15.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Oct  9 10:05:16 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1329928561' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct  9 10:05:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:05:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:16.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:05:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Oct  9 10:05:16 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4274574662' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct  9 10:05:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:16 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27833 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:16 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27592 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Oct  9 10:05:16 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1952453794' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct  9 10:05:16 compute-0 nova_compute[187439]: 2025-10-09 10:05:16.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:05:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:16 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:16 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:16 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:17 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:17.099Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:17.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:17.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:17.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:17 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Oct  9 10:05:17 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1880054577' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct  9 10:05:17 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27857 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:17 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v968: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:05:17 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27625 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:17 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.17988 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:17 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27869 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:17 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27643 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:17 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Oct  9 10:05:17 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1139436647' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct  9 10:05:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:17.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:18 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Oct  9 10:05:18 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1268401104' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct  9 10:05:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:18.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:18 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct  9 10:05:18 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct  9 10:05:18 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct  9 10:05:18 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct  9 10:05:18 compute-0 kernel: cfg80211: failed to load regulatory.db
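Annotation: the cfg80211 block above is benign on a virtual compute node: the kernel asks the firmware loader for the signed wireless regulatory database and falls back when the file is absent (error -2 is -ENOENT). A quick presence check, assuming the stock firmware path:

    from pathlib import Path

    # The kernel loads regulatory.db (and its .p7s signature) from /lib/firmware.
    for name in ("regulatory.db", "regulatory.db.p7s"):
        p = Path("/lib/firmware") / name
        print(p, "present" if p.exists() else "missing (-2 above is -ENOENT)")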
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27899 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27905 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18030 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27914 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
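Annotation: the pg_autoscaler block above is internally consistent: every logged 'pg target' equals the pool's capacity ratio times its bias times 300, which is what the default mon_target_pg_per_osd of 100 gives on a 3-OSD cluster (both values are inferred from the arithmetic, not stated in the log). A worked sketch on three of the pools:

    TARGET_PG_PER_OSD = 100  # assumed default mon_target_pg_per_osd
    NUM_OSDS = 3             # inferred: every logged target is ratio * bias * 300

    pools = {  # name: (capacity ratio, bias), copied from the log lines above
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.000665858301588852,  1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }

    for name, (ratio, bias) in pools.items():
        target = ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS
        print(f"{name}: pg target {target}")  # matches the logged values
    # The autoscaler then quantizes to a power of two; for targets this small
    # it keeps each pool's current pg_num (1, 32 and 16 in this log).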
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27697 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:18 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  9 10:05:18 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Oct  9 10:05:18 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2735777596' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct  9 10:05:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:18.934Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:18.941Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:18.942Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:18.942Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:19 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18051 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:19 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v969: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:05:19 compute-0 nova_compute[187439]: 2025-10-09 10:05:19.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:05:19 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18066 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:05:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:05:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:05:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:05:19 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27959 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:05:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:05:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:05:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:05:19 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27965 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:05:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:19.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:05:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Oct  9 10:05:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/977502647' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct  9 10:05:19 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18084 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27754 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:20 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Oct  9 10:05:20 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1381470307' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct  9 10:05:20 compute-0 podman[208640]: 2025-10-09 10:05:20.233744793 +0000 UTC m=+0.079936592 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  9 10:05:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:20.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18111 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18126 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:20 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  9 10:05:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Oct  9 10:05:21 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2681174729' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct  9 10:05:21 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28028 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:21 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v970: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:05:21 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28034 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:21 compute-0 ovs-appctl[209520]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct  9 10:05:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Oct  9 10:05:21 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1497053247' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct  9 10:05:21 compute-0 ovs-appctl[209530]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct  9 10:05:21 compute-0 ovs-appctl[209544]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct  9 10:05:21 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18165 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:21 compute-0 nova_compute[187439]: 2025-10-09 10:05:21.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:05:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:21.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:21 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:21 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:21 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:22 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18177 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:05:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:05:22] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Oct  9 10:05:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:05:22] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Oct  9 10:05:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:22.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct  9 10:05:22 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/101271543' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  9 10:05:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Oct  9 10:05:22 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/363929979' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct  9 10:05:22 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Oct  9 10:05:22 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/997421174' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct  9 10:05:22 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28097 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:23 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27871 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:23 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Oct  9 10:05:23 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3180762565' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct  9 10:05:23 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v971: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:05:23 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18234 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:23.572Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:23.588Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:23.588Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:23.588Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:05:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:23.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:05:23 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Oct  9 10:05:23 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3534344203' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  9 10:05:23 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28136 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:24 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27913 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:24.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:24 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Oct  9 10:05:24 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2405584722' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct  9 10:05:24 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Oct  9 10:05:24 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4266558429' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct  9 10:05:24 compute-0 nova_compute[187439]: 2025-10-09 10:05:24.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:05:24 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28163 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:24 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Oct  9 10:05:24 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3817141235' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct  9 10:05:24 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27937 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:24 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28178 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:25 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Oct  9 10:05:25 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1919356694' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27946 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v972: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18315 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28214 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:25 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Oct  9 10:05:25 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3632668552' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct  9 10:05:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:25.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27985 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28223 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:25 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  9 10:05:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Oct  9 10:05:26 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/183718478' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.27997 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  9 10:05:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:26.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18360 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Oct  9 10:05:26 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/213647537' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct  9 10:05:26 compute-0 nova_compute[187439]: 2025-10-09 10:05:26.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28268 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:26 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18387 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:26 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:27.101Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:27.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:27.109Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:27.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:27 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28274 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:27 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28280 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:27 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v973: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:05:27 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18399 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:27 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28036 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:27 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Oct  9 10:05:27 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2732266303' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct  9 10:05:27 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Oct  9 10:05:27 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3927175723' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct  9 10:05:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:27.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:27 compute-0 podman[211136]: 2025-10-09 10:05:27.987866729 +0000 UTC m=+0.050849144 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251001)
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18429 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:28.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18435 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:28 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  9 10:05:28 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Oct  9 10:05:28 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3527154993' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  9 10:05:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:28.935Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:28.944Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:28.944Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:28.945Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:29 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Oct  9 10:05:29 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1539775753' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct  9 10:05:29 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v974: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:05:29 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18450 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:29 compute-0 nova_compute[187439]: 2025-10-09 10:05:29.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:05:29 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18456 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  9 10:05:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:05:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:29.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:05:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:30.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:30 compute-0 virtqemud[187041]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  9 10:05:31 compute-0 systemd[1]: Starting Time & Date Service...
Oct  9 10:05:31 compute-0 systemd[1]: Started Time & Date Service.
Oct  9 10:05:31 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v975: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:05:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:31 compute-0 nova_compute[187439]: 2025-10-09 10:05:31.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:05:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:31.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:31 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:31 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:31 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:32 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:05:32] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Oct  9 10:05:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:05:32] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Oct  9 10:05:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:32.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:33 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v976: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:05:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:33.573Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:33.582Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:33.583Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:33.583Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:05:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:33.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:05:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:34.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:34 compute-0 podman[211900]: 2025-10-09 10:05:34.376391121 +0000 UTC m=+0.040011018 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 10:05:34 compute-0 nova_compute[187439]: 2025-10-09 10:05:34.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:05:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:05:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:05:35 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v977: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:05:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:35.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:36.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:36 compute-0 nova_compute[187439]: 2025-10-09 10:05:36.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:05:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:36 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:36 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:36 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:37.101Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:37.115Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:37.115Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:37.115Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:37 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v978: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:05:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:37.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:38.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:38.936Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:38.954Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:38.955Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:38.955Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:39 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v979: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:05:39 compute-0 nova_compute[187439]: 2025-10-09 10:05:39.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:05:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:39.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:40.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:40 compute-0 podman[211927]: 2025-10-09 10:05:40.598026069 +0000 UTC m=+0.040675652 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  9 10:05:41 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v980: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:05:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:41 compute-0 nova_compute[187439]: 2025-10-09 10:05:41.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:05:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:41.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:41 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:41 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:41 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:05:42] "GET /metrics HTTP/1.1" 200 48530 "" "Prometheus/2.51.0"
Oct  9 10:05:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:05:42] "GET /metrics HTTP/1.1" 200 48530 "" "Prometheus/2.51.0"
Oct  9 10:05:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:42.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:43 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v981: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:05:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:43.573Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:43.582Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:43.582Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:43.582Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:43.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:44.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:44 compute-0 nova_compute[187439]: 2025-10-09 10:05:44.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:05:45 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v982: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:05:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:45.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:46.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:46 compute-0 nova_compute[187439]: 2025-10-09 10:05:46.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:05:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:46 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:46 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:46 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:46 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:47.102Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:47.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:47.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:47.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:47 compute-0 nova_compute[187439]: 2025-10-09 10:05:47.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:05:47 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v983: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:05:47 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:05:47 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:47 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:05:47 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:47.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:48 compute-0 podman[212180]: 2025-10-09 10:05:48.284213084 +0000 UTC m=+0.029672724 container create 9328a6768f47cc1fd633ce45743573bab672ace984ef5f5226cdc1e33ed20dc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_payne, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  9 10:05:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:48.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:48 compute-0 systemd[1]: Started libpod-conmon-9328a6768f47cc1fd633ce45743573bab672ace984ef5f5226cdc1e33ed20dc6.scope.
Oct  9 10:05:48 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:05:48 compute-0 podman[212180]: 2025-10-09 10:05:48.349642581 +0000 UTC m=+0.095102241 container init 9328a6768f47cc1fd633ce45743573bab672ace984ef5f5226cdc1e33ed20dc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  9 10:05:48 compute-0 podman[212180]: 2025-10-09 10:05:48.355649006 +0000 UTC m=+0.101108646 container start 9328a6768f47cc1fd633ce45743573bab672ace984ef5f5226cdc1e33ed20dc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  9 10:05:48 compute-0 podman[212180]: 2025-10-09 10:05:48.356750683 +0000 UTC m=+0.102210324 container attach 9328a6768f47cc1fd633ce45743573bab672ace984ef5f5226cdc1e33ed20dc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_payne, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:05:48 compute-0 loving_payne[212193]: 167 167
Oct  9 10:05:48 compute-0 systemd[1]: libpod-9328a6768f47cc1fd633ce45743573bab672ace984ef5f5226cdc1e33ed20dc6.scope: Deactivated successfully.
Oct  9 10:05:48 compute-0 conmon[212193]: conmon 9328a6768f47cc1fd633 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9328a6768f47cc1fd633ce45743573bab672ace984ef5f5226cdc1e33ed20dc6.scope/container/memory.events
Oct  9 10:05:48 compute-0 podman[212180]: 2025-10-09 10:05:48.359802358 +0000 UTC m=+0.105262018 container died 9328a6768f47cc1fd633ce45743573bab672ace984ef5f5226cdc1e33ed20dc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_payne, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:05:48 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:48 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:48 compute-0 podman[212180]: 2025-10-09 10:05:48.272419475 +0000 UTC m=+0.017879135 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:05:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3a41ea41314fcb4ee7ac25cc38b1cbf986d3aa5a949227702029e1ce00de293-merged.mount: Deactivated successfully.
Oct  9 10:05:48 compute-0 podman[212180]: 2025-10-09 10:05:48.388893664 +0000 UTC m=+0.134353303 container remove 9328a6768f47cc1fd633ce45743573bab672ace984ef5f5226cdc1e33ed20dc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_payne, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  9 10:05:48 compute-0 systemd[1]: libpod-conmon-9328a6768f47cc1fd633ce45743573bab672ace984ef5f5226cdc1e33ed20dc6.scope: Deactivated successfully.
Oct  9 10:05:48 compute-0 podman[212215]: 2025-10-09 10:05:48.52994152 +0000 UTC m=+0.032921579 container create 216d2ea2266e5dd7cd53b8555247b9678988e3175d9e42f987c5794c97a4cf03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_elgamal, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:05:48 compute-0 systemd[1]: Started libpod-conmon-216d2ea2266e5dd7cd53b8555247b9678988e3175d9e42f987c5794c97a4cf03.scope.
Oct  9 10:05:48 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e94e3f6922a003d44539a32b9237b3ca0c1d0f954770c03c54d7e907586cc2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e94e3f6922a003d44539a32b9237b3ca0c1d0f954770c03c54d7e907586cc2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e94e3f6922a003d44539a32b9237b3ca0c1d0f954770c03c54d7e907586cc2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e94e3f6922a003d44539a32b9237b3ca0c1d0f954770c03c54d7e907586cc2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:05:48 compute-0 podman[212215]: 2025-10-09 10:05:48.593459462 +0000 UTC m=+0.096439541 container init 216d2ea2266e5dd7cd53b8555247b9678988e3175d9e42f987c5794c97a4cf03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_elgamal, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 10:05:48 compute-0 podman[212215]: 2025-10-09 10:05:48.598589236 +0000 UTC m=+0.101569295 container start 216d2ea2266e5dd7cd53b8555247b9678988e3175d9e42f987c5794c97a4cf03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 10:05:48 compute-0 podman[212215]: 2025-10-09 10:05:48.599872105 +0000 UTC m=+0.102852183 container attach 216d2ea2266e5dd7cd53b8555247b9678988e3175d9e42f987c5794c97a4cf03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:05:48 compute-0 podman[212215]: 2025-10-09 10:05:48.515548079 +0000 UTC m=+0.018528158 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:05:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 10:05:48 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 10:05:48 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:48.936Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:48.945Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:48.945Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:48.945Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]: [
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:    {
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:        "available": false,
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:        "being_replaced": false,
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:        "ceph_device_lvm": false,
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:        "lsm_data": {},
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:        "lvs": [],
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:        "path": "/dev/sr0",
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:        "rejected_reasons": [
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "Has a FileSystem",
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "Insufficient space (<5GB)"
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:        ],
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:        "sys_api": {
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "actuators": null,
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "device_nodes": [
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:                "sr0"
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            ],
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "devname": "sr0",
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "human_readable_size": "474.00 KB",
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "id_bus": "ata",
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "model": "QEMU DVD-ROM",
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "nr_requests": "64",
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "parent": "/dev/sr0",
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "partitions": {},
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "path": "/dev/sr0",
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "removable": "1",
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "rev": "2.5+",
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "ro": "0",
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "rotational": "0",
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "sas_address": "",
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "sas_device_handle": "",
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "scheduler_mode": "mq-deadline",
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "sectors": 0,
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "sectorsize": "2048",
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "size": 485376.0,
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "support_discard": "2048",
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "type": "disk",
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:            "vendor": "QEMU"
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:        }
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]:    }
Oct  9 10:05:49 compute-0 exciting_elgamal[212229]: ]
Oct  9 10:05:49 compute-0 systemd[1]: libpod-216d2ea2266e5dd7cd53b8555247b9678988e3175d9e42f987c5794c97a4cf03.scope: Deactivated successfully.
Oct  9 10:05:49 compute-0 podman[213451]: 2025-10-09 10:05:49.189543255 +0000 UTC m=+0.018511146 container died 216d2ea2266e5dd7cd53b8555247b9678988e3175d9e42f987c5794c97a4cf03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_elgamal, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid)
Oct  9 10:05:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e94e3f6922a003d44539a32b9237b3ca0c1d0f954770c03c54d7e907586cc2f-merged.mount: Deactivated successfully.
Oct  9 10:05:49 compute-0 podman[213451]: 2025-10-09 10:05:49.215638522 +0000 UTC m=+0.044606403 container remove 216d2ea2266e5dd7cd53b8555247b9678988e3175d9e42f987c5794c97a4cf03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_elgamal, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  9 10:05:49 compute-0 systemd[1]: libpod-conmon-216d2ea2266e5dd7cd53b8555247b9678988e3175d9e42f987c5794c97a4cf03.scope: Deactivated successfully.
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v984: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:05:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:05:49 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:05:49 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:05:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:05:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 10:05:49 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v985: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 10:05:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 10:05:49 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 10:05:49 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 10:05:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 10:05:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 10:05:49 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:05:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:05:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:05:49 compute-0 nova_compute[187439]: 2025-10-09 10:05:49.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_10:05:49
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'images', '.nfs', 'backups', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.meta', 'volumes']
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 10:05:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:05:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:05:49 compute-0 podman[213543]: 2025-10-09 10:05:49.693845621 +0000 UTC m=+0.028326294 container create 109eb4259a1da0526c7a734a065fffd7ff33e5072d57275607aea8e3ad1789f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:05:49 compute-0 systemd[1]: Started libpod-conmon-109eb4259a1da0526c7a734a065fffd7ff33e5072d57275607aea8e3ad1789f7.scope.
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:05:49 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:05:49 compute-0 podman[213543]: 2025-10-09 10:05:49.758950435 +0000 UTC m=+0.093431110 container init 109eb4259a1da0526c7a734a065fffd7ff33e5072d57275607aea8e3ad1789f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_goldstine, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  9 10:05:49 compute-0 podman[213543]: 2025-10-09 10:05:49.76320058 +0000 UTC m=+0.097681254 container start 109eb4259a1da0526c7a734a065fffd7ff33e5072d57275607aea8e3ad1789f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct  9 10:05:49 compute-0 podman[213543]: 2025-10-09 10:05:49.764295244 +0000 UTC m=+0.098775918 container attach 109eb4259a1da0526c7a734a065fffd7ff33e5072d57275607aea8e3ad1789f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:05:49 compute-0 admiring_goldstine[213556]: 167 167
Oct  9 10:05:49 compute-0 systemd[1]: libpod-109eb4259a1da0526c7a734a065fffd7ff33e5072d57275607aea8e3ad1789f7.scope: Deactivated successfully.
Oct  9 10:05:49 compute-0 conmon[213556]: conmon 109eb4259a1da0526c7a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-109eb4259a1da0526c7a734a065fffd7ff33e5072d57275607aea8e3ad1789f7.scope/container/memory.events
Oct  9 10:05:49 compute-0 podman[213543]: 2025-10-09 10:05:49.768611443 +0000 UTC m=+0.103092116 container died 109eb4259a1da0526c7a734a065fffd7ff33e5072d57275607aea8e3ad1789f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_goldstine, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:05:49 compute-0 podman[213543]: 2025-10-09 10:05:49.682784914 +0000 UTC m=+0.017265609 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:05:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0a09a8bf1d9bae6f4a7123dda277d9bbbcfc96b52c32bb5a8e3e62cbb0c1b7c-merged.mount: Deactivated successfully.
Oct  9 10:05:49 compute-0 podman[213543]: 2025-10-09 10:05:49.790100493 +0000 UTC m=+0.124581168 container remove 109eb4259a1da0526c7a734a065fffd7ff33e5072d57275607aea8e3ad1789f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:05:49 compute-0 systemd[1]: libpod-conmon-109eb4259a1da0526c7a734a065fffd7ff33e5072d57275607aea8e3ad1789f7.scope: Deactivated successfully.
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:05:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:05:49 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:49 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:49 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:49 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:49 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:05:49 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:49 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:49 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:05:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:49.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:49 compute-0 podman[213577]: 2025-10-09 10:05:49.920456779 +0000 UTC m=+0.031613322 container create ee5a2c9738022b283e9d764a79c2eeea739e057accd3b46ced48f5c0c8398e0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  9 10:05:49 compute-0 systemd[1]: Started libpod-conmon-ee5a2c9738022b283e9d764a79c2eeea739e057accd3b46ced48f5c0c8398e0b.scope.
Oct  9 10:05:49 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:05:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8518809cd5ffccc8f28c8a458e6bcfac0d66912233b197017b07ee72a7a68c71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:05:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8518809cd5ffccc8f28c8a458e6bcfac0d66912233b197017b07ee72a7a68c71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:05:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8518809cd5ffccc8f28c8a458e6bcfac0d66912233b197017b07ee72a7a68c71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:05:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8518809cd5ffccc8f28c8a458e6bcfac0d66912233b197017b07ee72a7a68c71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:05:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8518809cd5ffccc8f28c8a458e6bcfac0d66912233b197017b07ee72a7a68c71/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:05:49 compute-0 podman[213577]: 2025-10-09 10:05:49.977894697 +0000 UTC m=+0.089051240 container init ee5a2c9738022b283e9d764a79c2eeea739e057accd3b46ced48f5c0c8398e0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  9 10:05:49 compute-0 podman[213577]: 2025-10-09 10:05:49.98448797 +0000 UTC m=+0.095644513 container start ee5a2c9738022b283e9d764a79c2eeea739e057accd3b46ced48f5c0c8398e0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:05:49 compute-0 podman[213577]: 2025-10-09 10:05:49.98555435 +0000 UTC m=+0.096710893 container attach ee5a2c9738022b283e9d764a79c2eeea739e057accd3b46ced48f5c0c8398e0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_sinoussi, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:05:50 compute-0 podman[213577]: 2025-10-09 10:05:49.908025749 +0000 UTC m=+0.019182312 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:05:50 compute-0 intelligent_sinoussi[213590]: --> passed data devices: 0 physical, 1 LVM
Oct  9 10:05:50 compute-0 intelligent_sinoussi[213590]: --> All data devices are unavailable
Oct  9 10:05:50 compute-0 systemd[1]: libpod-ee5a2c9738022b283e9d764a79c2eeea739e057accd3b46ced48f5c0c8398e0b.scope: Deactivated successfully.
Oct  9 10:05:50 compute-0 podman[213606]: 2025-10-09 10:05:50.283109283 +0000 UTC m=+0.023751248 container died ee5a2c9738022b283e9d764a79c2eeea739e057accd3b46ced48f5c0c8398e0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_sinoussi, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  9 10:05:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8518809cd5ffccc8f28c8a458e6bcfac0d66912233b197017b07ee72a7a68c71-merged.mount: Deactivated successfully.
Oct  9 10:05:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.004000042s ======
Oct  9 10:05:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:50.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000042s
Oct  9 10:05:50 compute-0 podman[213606]: 2025-10-09 10:05:50.334997536 +0000 UTC m=+0.075639481 container remove ee5a2c9738022b283e9d764a79c2eeea739e057accd3b46ced48f5c0c8398e0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:05:50 compute-0 systemd[1]: libpod-conmon-ee5a2c9738022b283e9d764a79c2eeea739e057accd3b46ced48f5c0c8398e0b.scope: Deactivated successfully.
Oct  9 10:05:50 compute-0 podman[213607]: 2025-10-09 10:05:50.408824948 +0000 UTC m=+0.140254882 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct  9 10:05:50 compute-0 podman[213726]: 2025-10-09 10:05:50.793131354 +0000 UTC m=+0.030156846 container create 4432e5ae610225163716e0eeb36c5b8db115f05c63f2edece8a7320d7ffb3d42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct  9 10:05:50 compute-0 systemd[1]: Started libpod-conmon-4432e5ae610225163716e0eeb36c5b8db115f05c63f2edece8a7320d7ffb3d42.scope.
Oct  9 10:05:50 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:05:50 compute-0 podman[213726]: 2025-10-09 10:05:50.845610651 +0000 UTC m=+0.082636134 container init 4432e5ae610225163716e0eeb36c5b8db115f05c63f2edece8a7320d7ffb3d42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  9 10:05:50 compute-0 podman[213726]: 2025-10-09 10:05:50.850936314 +0000 UTC m=+0.087961796 container start 4432e5ae610225163716e0eeb36c5b8db115f05c63f2edece8a7320d7ffb3d42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_stonebraker, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:05:50 compute-0 elegant_stonebraker[213739]: 167 167
Oct  9 10:05:50 compute-0 systemd[1]: libpod-4432e5ae610225163716e0eeb36c5b8db115f05c63f2edece8a7320d7ffb3d42.scope: Deactivated successfully.
Oct  9 10:05:50 compute-0 podman[213726]: 2025-10-09 10:05:50.852108364 +0000 UTC m=+0.089133846 container attach 4432e5ae610225163716e0eeb36c5b8db115f05c63f2edece8a7320d7ffb3d42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_stonebraker, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Oct  9 10:05:50 compute-0 podman[213726]: 2025-10-09 10:05:50.856240607 +0000 UTC m=+0.093266088 container died 4432e5ae610225163716e0eeb36c5b8db115f05c63f2edece8a7320d7ffb3d42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_stonebraker, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  9 10:05:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-858d3e58f39f2d09360728ab4726f94d97d4dc7039fc0967244af71d6de2e3f1-merged.mount: Deactivated successfully.
Oct  9 10:05:50 compute-0 podman[213726]: 2025-10-09 10:05:50.875831839 +0000 UTC m=+0.112857321 container remove 4432e5ae610225163716e0eeb36c5b8db115f05c63f2edece8a7320d7ffb3d42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_stonebraker, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:05:50 compute-0 podman[213726]: 2025-10-09 10:05:50.780866477 +0000 UTC m=+0.017891979 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:05:50 compute-0 systemd[1]: libpod-conmon-4432e5ae610225163716e0eeb36c5b8db115f05c63f2edece8a7320d7ffb3d42.scope: Deactivated successfully.
Oct  9 10:05:51 compute-0 podman[213760]: 2025-10-09 10:05:51.000078453 +0000 UTC m=+0.027984770 container create 20f9a7626c4607475d44f7be3109fe5c7f18bfa984aecb8e1a008e108274082b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:05:51 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:50 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:51 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:50 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:51 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:50 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:51 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:51 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:51 compute-0 systemd[1]: Started libpod-conmon-20f9a7626c4607475d44f7be3109fe5c7f18bfa984aecb8e1a008e108274082b.scope.
Oct  9 10:05:51 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:05:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e986480dc4ed8583731244382cb099bf8818595e300ceae6e1e724accca642c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:05:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e986480dc4ed8583731244382cb099bf8818595e300ceae6e1e724accca642c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:05:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e986480dc4ed8583731244382cb099bf8818595e300ceae6e1e724accca642c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:05:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e986480dc4ed8583731244382cb099bf8818595e300ceae6e1e724accca642c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:05:51 compute-0 podman[213760]: 2025-10-09 10:05:51.062187449 +0000 UTC m=+0.090093785 container init 20f9a7626c4607475d44f7be3109fe5c7f18bfa984aecb8e1a008e108274082b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hermann, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Oct  9 10:05:51 compute-0 podman[213760]: 2025-10-09 10:05:51.066553601 +0000 UTC m=+0.094459919 container start 20f9a7626c4607475d44f7be3109fe5c7f18bfa984aecb8e1a008e108274082b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  9 10:05:51 compute-0 podman[213760]: 2025-10-09 10:05:51.069028088 +0000 UTC m=+0.096934404 container attach 20f9a7626c4607475d44f7be3109fe5c7f18bfa984aecb8e1a008e108274082b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  9 10:05:51 compute-0 podman[213760]: 2025-10-09 10:05:50.989082259 +0000 UTC m=+0.016988596 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:05:51 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v986: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]: {
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:    "1": [
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:        {
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:            "devices": [
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:                "/dev/loop3"
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:            ],
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:            "lv_name": "ceph_lv0",
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:            "lv_size": "21470642176",
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:            "name": "ceph_lv0",
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:            "tags": {
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:                "ceph.cephx_lockbox_secret": "",
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:                "ceph.cluster_name": "ceph",
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:                "ceph.crush_device_class": "",
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:                "ceph.encrypted": "0",
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:                "ceph.osd_id": "1",
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:                "ceph.type": "block",
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:                "ceph.vdo": "0",
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:                "ceph.with_tpm": "0"
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:            },
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:            "type": "block",
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:            "vg_name": "ceph_vg0"
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:        }
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]:    ]
Oct  9 10:05:51 compute-0 wonderful_hermann[213773]: }
Oct  9 10:05:51 compute-0 systemd[1]: libpod-20f9a7626c4607475d44f7be3109fe5c7f18bfa984aecb8e1a008e108274082b.scope: Deactivated successfully.
Oct  9 10:05:51 compute-0 podman[213760]: 2025-10-09 10:05:51.35040773 +0000 UTC m=+0.378314047 container died 20f9a7626c4607475d44f7be3109fe5c7f18bfa984aecb8e1a008e108274082b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hermann, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:05:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e986480dc4ed8583731244382cb099bf8818595e300ceae6e1e724accca642c0-merged.mount: Deactivated successfully.
Oct  9 10:05:51 compute-0 podman[213760]: 2025-10-09 10:05:51.373601205 +0000 UTC m=+0.401507522 container remove 20f9a7626c4607475d44f7be3109fe5c7f18bfa984aecb8e1a008e108274082b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:05:51 compute-0 systemd[1]: libpod-conmon-20f9a7626c4607475d44f7be3109fe5c7f18bfa984aecb8e1a008e108274082b.scope: Deactivated successfully.
Oct  9 10:05:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:51 compute-0 podman[213873]: 2025-10-09 10:05:51.858281821 +0000 UTC m=+0.030725830 container create 36591e836252d01c650f32956553872c95c3ee61b4f94dec65db4c913da87515 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_black, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:05:51 compute-0 nova_compute[187439]: 2025-10-09 10:05:51.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:51 compute-0 systemd[1]: Started libpod-conmon-36591e836252d01c650f32956553872c95c3ee61b4f94dec65db4c913da87515.scope.
Oct  9 10:05:51 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:05:51 compute-0 podman[213873]: 2025-10-09 10:05:51.914343753 +0000 UTC m=+0.086787782 container init 36591e836252d01c650f32956553872c95c3ee61b4f94dec65db4c913da87515 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_black, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:05:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:51.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:51 compute-0 podman[213873]: 2025-10-09 10:05:51.919667622 +0000 UTC m=+0.092111632 container start 36591e836252d01c650f32956553872c95c3ee61b4f94dec65db4c913da87515 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_black, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:05:51 compute-0 podman[213873]: 2025-10-09 10:05:51.920765603 +0000 UTC m=+0.093209612 container attach 36591e836252d01c650f32956553872c95c3ee61b4f94dec65db4c913da87515 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_black, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  9 10:05:51 compute-0 optimistic_black[213886]: 167 167
Oct  9 10:05:51 compute-0 systemd[1]: libpod-36591e836252d01c650f32956553872c95c3ee61b4f94dec65db4c913da87515.scope: Deactivated successfully.
Oct  9 10:05:51 compute-0 podman[213873]: 2025-10-09 10:05:51.923331822 +0000 UTC m=+0.095775831 container died 36591e836252d01c650f32956553872c95c3ee61b4f94dec65db4c913da87515 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_black, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  9 10:05:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-49c71bff54459bb6515db828e69d831cfb5fcfb1350db1ca0a52e74821ccd080-merged.mount: Deactivated successfully.
Oct  9 10:05:51 compute-0 podman[213873]: 2025-10-09 10:05:51.845407885 +0000 UTC m=+0.017851904 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:05:51 compute-0 podman[213873]: 2025-10-09 10:05:51.954840967 +0000 UTC m=+0.127284976 container remove 36591e836252d01c650f32956553872c95c3ee61b4f94dec65db4c913da87515 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:05:51 compute-0 systemd[1]: libpod-conmon-36591e836252d01c650f32956553872c95c3ee61b4f94dec65db4c913da87515.scope: Deactivated successfully.
Oct  9 10:05:52 compute-0 podman[213907]: 2025-10-09 10:05:52.087898767 +0000 UTC m=+0.036725252 container create 2b521b23b3943c4e13b86281625edbda22d7e5b17d8cf6dabecd7b692d5fa796 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_elbakyan, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  9 10:05:52 compute-0 systemd[1]: Started libpod-conmon-2b521b23b3943c4e13b86281625edbda22d7e5b17d8cf6dabecd7b692d5fa796.scope.
Oct  9 10:05:52 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:05:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd015bfaf124f4a6a297cabb9a19be8b7097021ac3adf63ebad4264013225f4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:05:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd015bfaf124f4a6a297cabb9a19be8b7097021ac3adf63ebad4264013225f4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:05:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd015bfaf124f4a6a297cabb9a19be8b7097021ac3adf63ebad4264013225f4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:05:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd015bfaf124f4a6a297cabb9a19be8b7097021ac3adf63ebad4264013225f4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:05:52 compute-0 podman[213907]: 2025-10-09 10:05:52.155825683 +0000 UTC m=+0.104652168 container init 2b521b23b3943c4e13b86281625edbda22d7e5b17d8cf6dabecd7b692d5fa796 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_elbakyan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:05:52 compute-0 podman[213907]: 2025-10-09 10:05:52.16145123 +0000 UTC m=+0.110277716 container start 2b521b23b3943c4e13b86281625edbda22d7e5b17d8cf6dabecd7b692d5fa796 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_elbakyan, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  9 10:05:52 compute-0 podman[213907]: 2025-10-09 10:05:52.164177131 +0000 UTC m=+0.113003635 container attach 2b521b23b3943c4e13b86281625edbda22d7e5b17d8cf6dabecd7b692d5fa796 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_elbakyan, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:05:52 compute-0 podman[213907]: 2025-10-09 10:05:52.069492629 +0000 UTC m=+0.018319134 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:05:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:05:52] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Oct  9 10:05:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:05:52] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Oct  9 10:05:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:52.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:52 compute-0 lvm[213996]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:05:52 compute-0 lvm[213996]: VG ceph_vg0 finished
Oct  9 10:05:52 compute-0 xenodochial_elbakyan[213921]: {}
Oct  9 10:05:52 compute-0 lvm[213999]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:05:52 compute-0 lvm[213999]: VG ceph_vg0 finished
Oct  9 10:05:52 compute-0 podman[213907]: 2025-10-09 10:05:52.660777934 +0000 UTC m=+0.609604419 container died 2b521b23b3943c4e13b86281625edbda22d7e5b17d8cf6dabecd7b692d5fa796 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_elbakyan, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:05:52 compute-0 systemd[1]: libpod-2b521b23b3943c4e13b86281625edbda22d7e5b17d8cf6dabecd7b692d5fa796.scope: Deactivated successfully.
Oct  9 10:05:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd015bfaf124f4a6a297cabb9a19be8b7097021ac3adf63ebad4264013225f4b-merged.mount: Deactivated successfully.
Oct  9 10:05:52 compute-0 podman[213907]: 2025-10-09 10:05:52.683435648 +0000 UTC m=+0.632262133 container remove 2b521b23b3943c4e13b86281625edbda22d7e5b17d8cf6dabecd7b692d5fa796 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:05:52 compute-0 systemd[1]: libpod-conmon-2b521b23b3943c4e13b86281625edbda22d7e5b17d8cf6dabecd7b692d5fa796.scope: Deactivated successfully.
Oct  9 10:05:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:05:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:52 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:05:52 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:53 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v987: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 10:05:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:53.574Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:53.587Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:53.588Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:53.588Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:53 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:53 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:53.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:54.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:54 compute-0 nova_compute[187439]: 2025-10-09 10:05:54.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:55 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v988: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 10:05:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:05:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:55.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:05:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:55 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:55 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:55 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:55 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:56 compute-0 nova_compute[187439]: 2025-10-09 10:05:56.256 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:56.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:56 compute-0 nova_compute[187439]: 2025-10-09 10:05:56.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:57.103Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:57.112Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:57.112Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:57.113Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:57 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v989: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 10:05:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:57.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:58 compute-0 podman[214065]: 2025-10-09 10:05:58.078338393 +0000 UTC m=+0.053879238 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:05:58 compute-0 nova_compute[187439]: 2025-10-09 10:05:58.245 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:58 compute-0 nova_compute[187439]: 2025-10-09 10:05:58.265 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:05:58 compute-0 nova_compute[187439]: 2025-10-09 10:05:58.265 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:05:58 compute-0 nova_compute[187439]: 2025-10-09 10:05:58.265 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:05:58 compute-0 nova_compute[187439]: 2025-10-09 10:05:58.265 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:05:58 compute-0 nova_compute[187439]: 2025-10-09 10:05:58.265 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:05:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:58.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:58 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:05:58 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1831269240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:05:58 compute-0 nova_compute[187439]: 2025-10-09 10:05:58.615 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:05:58 compute-0 nova_compute[187439]: 2025-10-09 10:05:58.808 2 WARNING nova.virt.libvirt.driver [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:05:58 compute-0 nova_compute[187439]: 2025-10-09 10:05:58.809 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4453MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:05:58 compute-0 nova_compute[187439]: 2025-10-09 10:05:58.810 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:05:58 compute-0 nova_compute[187439]: 2025-10-09 10:05:58.810 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:05:58 compute-0 nova_compute[187439]: 2025-10-09 10:05:58.926 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:05:58 compute-0 nova_compute[187439]: 2025-10-09 10:05:58.927 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:05:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:58.938Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:58.955Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:58.956Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:05:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:05:58.956Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
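The Alertmanager dispatcher above keeps aborting its webhook deliveries because the three `*.shiftstack` names return NXDOMAIN from the resolver at 192.168.122.80:53. A quick sketch to reproduce the lookup failure from the host — note this uses the system resolver rather than dialing 192.168.122.80 directly, as Alertmanager's Go runtime does:

```python
import socket

# Hostnames and port taken from the webhook errors above.
hosts = [
    "np0005478302.shiftstack",
    "np0005478303.shiftstack",
    "np0005478304.shiftstack",
]

for host in hosts:
    try:
        addrs = {ai[4][0] for ai in socket.getaddrinfo(host, 8443)}
        print(f"{host}: {sorted(addrs)}")
    except socket.gaierror as exc:  # "no such host" surfaces here as EAI_NONAME
        print(f"{host}: unresolved ({exc})")
```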
Oct  9 10:05:58 compute-0 nova_compute[187439]: 2025-10-09 10:05:58.992 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Refreshing inventories for resource provider f97cf330-2912-473f-81a8-cda2f8811838 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct  9 10:05:59 compute-0 nova_compute[187439]: 2025-10-09 10:05:59.054 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Updating ProviderTree inventory for provider f97cf330-2912-473f-81a8-cda2f8811838 from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct  9 10:05:59 compute-0 nova_compute[187439]: 2025-10-09 10:05:59.055 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Updating inventory in ProviderTree for provider f97cf330-2912-473f-81a8-cda2f8811838 with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  9 10:05:59 compute-0 nova_compute[187439]: 2025-10-09 10:05:59.066 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Refreshing aggregate associations for resource provider f97cf330-2912-473f-81a8-cda2f8811838, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct  9 10:05:59 compute-0 nova_compute[187439]: 2025-10-09 10:05:59.084 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Refreshing trait associations for resource provider f97cf330-2912-473f-81a8-cda2f8811838, traits: HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_ABM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX2,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX512VAES,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSSE3,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_ATTACH_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct  9 10:05:59 compute-0 nova_compute[187439]: 2025-10-09 10:05:59.100 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
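The inventory dict logged at 10:05:59.054 is exactly what placement schedules against: usable capacity per resource class is (total - reserved) × allocation_ratio, the documented placement formula (min_unit/step_size only constrain allocation granularity). A worked check against the numbers above:

```python
# Inventory copied from the ProviderTree update at 10:05:59.054.
inventory = {
    "VCPU": {"total": 4, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
}

# Placement's effective capacity: (total - reserved) * allocation_ratio.
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, cap)
# VCPU 16.0, MEMORY_MB 7168.0, DISK_GB 52.2 -- so this 4-vCPU guest can
# host up to 16 vCPUs of instances at the 4.0 overcommit ratio.
```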
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v990: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:05:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
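Each pg_autoscaler pair above logs a pool's share of raw capacity, its bias, and the resulting raw PG target. The raw target reproduces as ratio × bias × a cluster-wide PG budget; the budget of 300 used below is an inference from the numbers (plausibly mon_target_pg_per_osd=100 on a 3-OSD cluster), not something the log states. The "quantized to" figure additionally rounds to a power of two subject to pool minimums and the default 3× no-change threshold, which is why these tiny targets leave pg_num at 32 (or 1 for '.mgr'):

```python
# Reproduce the autoscaler's raw pg target from the log lines above.
# pg_budget=300 is an assumption (e.g. mon_target_pg_per_osd=100 x 3 OSDs).
def raw_pg_target(capacity_ratio: float, bias: float, pg_budget: int = 300) -> float:
    return capacity_ratio * bias * pg_budget

# Pool '.mgr':   matches "pg target 0.0021557249951162337 quantized to 1"
print(raw_pg_target(7.185749983720779e-06, 1.0))
# Pool 'images': matches "pg target 0.19975749047665559 quantized to 32"
print(raw_pg_target(0.000665858301588852, 1.0))
# Pool 'cephfs.cephfs.meta' carries bias 4.0: matches 0.0006104707950771635
print(raw_pg_target(5.087256625643029e-07, 4.0))
```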
Oct  9 10:05:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:05:59 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2111307278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:05:59 compute-0 nova_compute[187439]: 2025-10-09 10:05:59.448 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 10:05:59 compute-0 nova_compute[187439]: 2025-10-09 10:05:59.452 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  9 10:05:59 compute-0 nova_compute[187439]: 2025-10-09 10:05:59.463 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  9 10:05:59 compute-0 nova_compute[187439]: 2025-10-09 10:05:59.464 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  9 10:05:59 compute-0 nova_compute[187439]: 2025-10-09 10:05:59.464 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 10:05:59 compute-0 nova_compute[187439]: 2025-10-09 10:05:59.465 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:05:59 compute-0 nova_compute[187439]: 2025-10-09 10:05:59.465 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  9 10:05:59 compute-0 nova_compute[187439]: 2025-10-09 10:05:59.473 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  9 10:05:59 compute-0 nova_compute[187439]: 2025-10-09 10:05:59.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
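The `ceph df --format=json` call that nova shells out to above (started at 10:05:59.100, returned 0 in 0.349s) is how the RBD-backed tracker learns cluster capacity. A hedged sketch of pulling the totals from that JSON, assuming the usual top-level "stats" keys (`total_bytes`, `total_avail_bytes`) and that the host has the 'openstack' keyring:

```python
import json
import subprocess

# Same command the resource tracker runs above; needs a reachable cluster.
out = subprocess.check_output([
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
])
stats = json.loads(out)["stats"]
gib = 1024 ** 3
print(f'total={stats["total_bytes"] / gib:.0f}GiB '
      f'avail={stats["total_avail_bytes"] / gib:.0f}GiB')
# On this cluster the pgmap lines report "60 GiB / 60 GiB avail".
```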
Oct  9 10:05:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:05:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:59.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:59 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:59 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:05:59 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:00 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:00.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:01 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  9 10:06:01 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 10:06:01 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v991: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 10:06:01 compute-0 nova_compute[187439]: 2025-10-09 10:06:01.474 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:06:01 compute-0 nova_compute[187439]: 2025-10-09 10:06:01.475 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:06:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:01 compute-0 nova_compute[187439]: 2025-10-09 10:06:01.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:06:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:01.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:02 compute-0 nova_compute[187439]: 2025-10-09 10:06:02.242 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:06:02 compute-0 nova_compute[187439]: 2025-10-09 10:06:02.245 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:06:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:06:02] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Oct  9 10:06:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:06:02] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Oct  9 10:06:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:02.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:03 compute-0 nova_compute[187439]: 2025-10-09 10:06:03.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:06:03 compute-0 nova_compute[187439]: 2025-10-09 10:06:03.248 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  9 10:06:03 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v992: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 10:06:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:03.575Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:03.586Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:03.587Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:03.587Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:03.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:04 compute-0 nova_compute[187439]: 2025-10-09 10:06:04.266 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:06:04 compute-0 nova_compute[187439]: 2025-10-09 10:06:04.266 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  9 10:06:04 compute-0 nova_compute[187439]: 2025-10-09 10:06:04.266 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  9 10:06:04 compute-0 nova_compute[187439]: 2025-10-09 10:06:04.281 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  9 10:06:04 compute-0 nova_compute[187439]: 2025-10-09 10:06:04.281 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:06:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:04.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:04 compute-0 nova_compute[187439]: 2025-10-09 10:06:04.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:06:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:06:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:06:04 compute-0 podman[214138]: 2025-10-09 10:06:04.618708193 +0000 UTC m=+0.050229697 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
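The podman health_status events above come from the healthcheck configured in config_data ('test': '/openstack/healthcheck', mounted into the container); podman records the result on the container. A sketch of reading that state back, assuming podman is on PATH and noting that older releases keyed it as "Healthcheck" rather than "Health":

```python
import json
import subprocess

# Inspect the container named in the health_status event above.
out = subprocess.check_output(["podman", "inspect", "ovn_metadata_agent"])
state = json.loads(out)[0]["State"]
health = state.get("Health") or state.get("Healthcheck") or {}
print(health.get("Status"), "failing_streak:", health.get("FailingStreak"))
# Expect "healthy" with failing_streak 0, matching the event above.
```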
Oct  9 10:06:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:04 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:04 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:04 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:05 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:05 compute-0 nova_compute[187439]: 2025-10-09 10:06:05.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:06:05 compute-0 nova_compute[187439]: 2025-10-09 10:06:05.247 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  9 10:06:05 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v993: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 10:06:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:05.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:06.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:06 compute-0 nova_compute[187439]: 2025-10-09 10:06:06.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:06:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:07.105Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:07.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:07.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:07.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:07 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v994: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 10:06:07 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:07 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:07 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:07.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:08.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:08.938Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:08.948Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:08.948Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:08.948Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:09 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v995: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 10:06:09 compute-0 nova_compute[187439]: 2025-10-09 10:06:09.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:06:09 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:09 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:09 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:09.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:09 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:09 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:09 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:06:10.118 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 10:06:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:06:10.119 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 10:06:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:06:10.119 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
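The Acquiring/acquired/released triple above (and the earlier "compute_resources" one) is the standard DEBUG trace from oslo.concurrency's lock wrapper; the `inner ... lockutils.py` suffix is the decorator's wrapper function logging its own source line. A minimal sketch that produces the same three lines, assuming oslo.concurrency is installed:

```python
import logging

from oslo_concurrency import lockutils

# DEBUG-level logging is what makes lockutils emit the trio seen above.
logging.basicConfig(level=logging.DEBUG)

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    pass  # guarded section; the held time is reported on release

check_child_processes()
```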
Oct  9 10:06:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:10.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:10 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Oct  9 10:06:10 compute-0 systemd[1]: session-41.scope: Consumed 2min 11.793s CPU time, 728.5M memory peak, read 230.2M from disk, written 229.2M to disk.
Oct  9 10:06:10 compute-0 systemd-logind[798]: Session 41 logged out. Waiting for processes to exit.
Oct  9 10:06:10 compute-0 systemd-logind[798]: Removed session 41.
Oct  9 10:06:10 compute-0 systemd-logind[798]: New session 43 of user zuul.
Oct  9 10:06:10 compute-0 systemd[1]: Started Session 43 of User zuul.
Oct  9 10:06:10 compute-0 podman[214162]: 2025-10-09 10:06:10.751398944 +0000 UTC m=+0.068223457 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd)
Oct  9 10:06:10 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Oct  9 10:06:10 compute-0 systemd-logind[798]: Session 43 logged out. Waiting for processes to exit.
Oct  9 10:06:10 compute-0 systemd-logind[798]: Removed session 43.
Oct  9 10:06:10 compute-0 systemd-logind[798]: New session 44 of user zuul.
Oct  9 10:06:10 compute-0 systemd[1]: Started Session 44 of User zuul.
Oct  9 10:06:11 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Oct  9 10:06:11 compute-0 systemd-logind[798]: Session 44 logged out. Waiting for processes to exit.
Oct  9 10:06:11 compute-0 systemd-logind[798]: Removed session 44.
Oct  9 10:06:11 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v996: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 10:06:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:11 compute-0 nova_compute[187439]: 2025-10-09 10:06:11.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:06:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  9 10:06:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2347622312' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  9 10:06:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  9 10:06:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2347622312' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  9 10:06:11 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:11 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:11 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:11.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:06:12] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Oct  9 10:06:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:06:12] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Oct  9 10:06:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:12.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:13 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v997: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:06:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:13.577Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:13.597Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:13.597Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:13.597Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:13 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:13 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:13 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:13.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:14.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:14 compute-0 nova_compute[187439]: 2025-10-09 10:06:14.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:06:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:14 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:14 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:14 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:15 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v998: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:06:15 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:15 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:15 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:15.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:16.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:16 compute-0 nova_compute[187439]: 2025-10-09 10:06:16.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:06:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:17.105Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:17.118Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:17.118Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:17.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:17 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v999: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 0 B/s wr, 8 op/s
Oct  9 10:06:17 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:17 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:17 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:17.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:18.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:18.938Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:18.947Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:18.948Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:18.949Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:19 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1000: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 0 B/s wr, 8 op/s
Oct  9 10:06:19 compute-0 nova_compute[187439]: 2025-10-09 10:06:19.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:06:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:06:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:06:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:06:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:06:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:06:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:06:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:06:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:06:19 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:19 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:19 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:19.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:20.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:20 compute-0 podman[214270]: 2025-10-09 10:06:20.618682833 +0000 UTC m=+0.061857763 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller)
Oct  9 10:06:21 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1001: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Oct  9 10:06:21 compute-0 systemd[1]: Stopping User Manager for UID 1000...
Oct  9 10:06:21 compute-0 systemd[204665]: Activating special unit Exit the Session...
Oct  9 10:06:21 compute-0 systemd[204665]: Stopped target Main User Target.
Oct  9 10:06:21 compute-0 systemd[204665]: Stopped target Basic System.
Oct  9 10:06:21 compute-0 systemd[204665]: Stopped target Paths.
Oct  9 10:06:21 compute-0 systemd[204665]: Stopped target Sockets.
Oct  9 10:06:21 compute-0 systemd[204665]: Stopped target Timers.
Oct  9 10:06:21 compute-0 systemd[204665]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  9 10:06:21 compute-0 systemd[204665]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  9 10:06:21 compute-0 systemd[204665]: Closed D-Bus User Message Bus Socket.
Oct  9 10:06:21 compute-0 systemd[204665]: Stopped Create User's Volatile Files and Directories.
Oct  9 10:06:21 compute-0 systemd[204665]: Removed slice User Application Slice.
Oct  9 10:06:21 compute-0 systemd[204665]: Reached target Shutdown.
Oct  9 10:06:21 compute-0 systemd[204665]: Finished Exit the Session.
Oct  9 10:06:21 compute-0 systemd[204665]: Reached target Exit the Session.
Oct  9 10:06:21 compute-0 systemd[1]: user@1000.service: Deactivated successfully.
Oct  9 10:06:21 compute-0 systemd[1]: Stopped User Manager for UID 1000.
Oct  9 10:06:21 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/1000...
Oct  9 10:06:21 compute-0 systemd[1]: run-user-1000.mount: Deactivated successfully.
Oct  9 10:06:21 compute-0 systemd[1]: user-runtime-dir@1000.service: Deactivated successfully.
Oct  9 10:06:21 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/1000.
Oct  9 10:06:21 compute-0 systemd[1]: Removed slice User Slice of UID 1000.
Oct  9 10:06:21 compute-0 systemd[1]: user-1000.slice: Consumed 2min 12.203s CPU time, 734.3M memory peak, read 230.2M from disk, written 229.2M to disk.
Oct  9 10:06:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:21 compute-0 nova_compute[187439]: 2025-10-09 10:06:21.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:21 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:21 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:21 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:21.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:06:22] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Oct  9 10:06:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:06:22] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Oct  9 10:06:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:22.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:23 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1002: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Oct  9 10:06:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:23.577Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:23.586Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:23.586Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:23.587Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:23 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:23 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:23 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:23.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:24 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:23 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:24 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:23 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:24 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:23 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:24 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:23 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:24.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:24 compute-0 nova_compute[187439]: 2025-10-09 10:06:24.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:25 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1003: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Oct  9 10:06:25 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:25 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:25 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:25.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:26.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:26 compute-0 nova_compute[187439]: 2025-10-09 10:06:26.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:27.106Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:27.116Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:27.117Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:27.117Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:27 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1004: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Oct  9 10:06:27 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:27 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.002000022s ======
Oct  9 10:06:27 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:27.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000022s
Oct  9 10:06:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:28.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:28 compute-0 podman[214303]: 2025-10-09 10:06:28.599643046 +0000 UTC m=+0.043411870 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251001)
Oct  9 10:06:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:28.939Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:29 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:29.062Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:29 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:29.062Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:29 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:29.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:29 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1005: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Oct  9 10:06:29 compute-0 nova_compute[187439]: 2025-10-09 10:06:29.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:29 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:29 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:29 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:29.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:30.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:31 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1006: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 0 B/s wr, 130 op/s
Oct  9 10:06:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:31 compute-0 nova_compute[187439]: 2025-10-09 10:06:31.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:31 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:31 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:31 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:31.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:31 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:31 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:31 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:31 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:06:32] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Oct  9 10:06:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:06:32] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Oct  9 10:06:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:32.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:33 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1007: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:06:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:33.578Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:33.588Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:33.589Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:33.589Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:33 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:33 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:33 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:33.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:34.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:34 compute-0 nova_compute[187439]: 2025-10-09 10:06:34.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:06:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:06:35 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1008: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:06:35 compute-0 podman[214351]: 2025-10-09 10:06:35.623916214 +0000 UTC m=+0.066129437 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible)
Oct  9 10:06:35 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:35 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:35 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:35.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:36 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:36 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:36 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:36.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:36 compute-0 nova_compute[187439]: 2025-10-09 10:06:36.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:37.106Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:37.120Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:37.120Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:37.121Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:37 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1009: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:06:37 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:37 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:37 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:37.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:38.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:38.939Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:38.946Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:38.947Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:38.947Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:39 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1010: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:06:39 compute-0 nova_compute[187439]: 2025-10-09 10:06:39.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:39 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:39 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:39 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:39.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:40.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:41 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:41 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:41 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:41 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:41 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:41 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1011: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:06:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:41 compute-0 podman[214374]: 2025-10-09 10:06:41.607865913 +0000 UTC m=+0.048359550 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 10:06:41 compute-0 nova_compute[187439]: 2025-10-09 10:06:41.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:41 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:41 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:41 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:41.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:06:42] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Oct  9 10:06:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:06:42] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Oct  9 10:06:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:42.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:43 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1012: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:06:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:43.578Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:43.586Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:43.587Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:43.587Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:43 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:43 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:43 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:43.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:44.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:44 compute-0 nova_compute[187439]: 2025-10-09 10:06:44.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:45 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1013: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:06:45 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:45 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:45 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:45.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:45 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:46 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:46 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:46 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:46 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:46.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:46 compute-0 nova_compute[187439]: 2025-10-09 10:06:46.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:47.107Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:47.118Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:47.118Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:47.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:47 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1014: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:06:47 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:47 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:47 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:47.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:48.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:48.939Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": context deadline exceeded"
Oct  9 10:06:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:48.948Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:48.948Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:48.948Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:49 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1015: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:06:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_10:06:49
Oct  9 10:06:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 10:06:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 10:06:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['.rgw.root', 'images', 'cephfs.cephfs.meta', 'backups', '.mgr', 'cephfs.cephfs.data', '.nfs', 'volumes', 'default.rgw.log', 'vms', 'default.rgw.meta', 'default.rgw.control']
Oct  9 10:06:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 10:06:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:06:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:06:49 compute-0 nova_compute[187439]: 2025-10-09 10:06:49.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:06:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:06:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:06:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:06:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:06:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 10:06:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:06:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:06:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:06:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:06:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:06:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:06:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 10:06:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:06:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:06:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:06:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:06:49 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:49 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:49 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:49.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:50.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:51 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:50 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:51 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:50 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:51 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:50 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:51 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:51 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:51 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1016: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:06:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:51 compute-0 ceph-mgr[4772]: [devicehealth INFO root] Check health
Oct  9 10:06:51 compute-0 podman[214401]: 2025-10-09 10:06:51.609649351 +0000 UTC m=+0.052284973 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct  9 10:06:51 compute-0 nova_compute[187439]: 2025-10-09 10:06:51.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:06:51 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:51 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:51 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:51.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:06:52] "GET /metrics HTTP/1.1" 200 48530 "" "Prometheus/2.51.0"
Oct  9 10:06:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:06:52] "GET /metrics HTTP/1.1" 200 48530 "" "Prometheus/2.51.0"
Oct  9 10:06:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:52.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 10:06:53 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 10:06:53 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:53 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1017: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:06:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 10:06:53 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 10:06:53 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:53.579Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:53.591Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:53.591Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:53.592Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:06:53 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:53 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:06:53 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:53 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:53 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:53 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:53.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 10:06:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 10:06:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 10:06:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 10:06:54 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:54.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:54 compute-0 nova_compute[187439]: 2025-10-09 10:06:54.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:06:55 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1018: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:06:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 10:06:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 10:06:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:06:55 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:06:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 10:06:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:06:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 10:06:55 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1019: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 10:06:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 10:06:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 10:06:55 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 10:06:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 10:06:55 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:06:55 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:06:55 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:06:55 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:55 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:55 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:55.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:55 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:55 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:55 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:55 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:56 compute-0 podman[214679]: 2025-10-09 10:06:56.1389827 +0000 UTC m=+0.029901885 container create b088f4c2226400b1497e1d6830261a369594967c1132f52680e9a84c82e823d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_babbage, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:06:56 compute-0 systemd[1]: Started libpod-conmon-b088f4c2226400b1497e1d6830261a369594967c1132f52680e9a84c82e823d7.scope.
Oct  9 10:06:56 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:06:56 compute-0 podman[214679]: 2025-10-09 10:06:56.203008851 +0000 UTC m=+0.093928046 container init b088f4c2226400b1497e1d6830261a369594967c1132f52680e9a84c82e823d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  9 10:06:56 compute-0 podman[214679]: 2025-10-09 10:06:56.208517769 +0000 UTC m=+0.099436955 container start b088f4c2226400b1497e1d6830261a369594967c1132f52680e9a84c82e823d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_babbage, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:06:56 compute-0 podman[214679]: 2025-10-09 10:06:56.209751496 +0000 UTC m=+0.100670701 container attach b088f4c2226400b1497e1d6830261a369594967c1132f52680e9a84c82e823d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  9 10:06:56 compute-0 infallible_babbage[214692]: 167 167
Oct  9 10:06:56 compute-0 systemd[1]: libpod-b088f4c2226400b1497e1d6830261a369594967c1132f52680e9a84c82e823d7.scope: Deactivated successfully.
Oct  9 10:06:56 compute-0 conmon[214692]: conmon b088f4c2226400b1497e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b088f4c2226400b1497e1d6830261a369594967c1132f52680e9a84c82e823d7.scope/container/memory.events
Oct  9 10:06:56 compute-0 podman[214679]: 2025-10-09 10:06:56.213348558 +0000 UTC m=+0.104267744 container died b088f4c2226400b1497e1d6830261a369594967c1132f52680e9a84c82e823d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Oct  9 10:06:56 compute-0 podman[214679]: 2025-10-09 10:06:56.127173933 +0000 UTC m=+0.018093138 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:06:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-27e15b5a4d3b40ef67de63c11c2506064b3067d8670f5c7d1864589b44467598-merged.mount: Deactivated successfully.
Oct  9 10:06:56 compute-0 podman[214679]: 2025-10-09 10:06:56.23110001 +0000 UTC m=+0.122019196 container remove b088f4c2226400b1497e1d6830261a369594967c1132f52680e9a84c82e823d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:06:56 compute-0 systemd[1]: libpod-conmon-b088f4c2226400b1497e1d6830261a369594967c1132f52680e9a84c82e823d7.scope: Deactivated successfully.
Oct  9 10:06:56 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:56 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:56 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:06:56 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:56 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:56 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:06:56 compute-0 podman[214714]: 2025-10-09 10:06:56.357101505 +0000 UTC m=+0.028695611 container create d575a9a7850d944545f1d605596ba220117cde4591ab3d5cc3a2939ad74fb090 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  9 10:06:56 compute-0 systemd[1]: Started libpod-conmon-d575a9a7850d944545f1d605596ba220117cde4591ab3d5cc3a2939ad74fb090.scope.
Oct  9 10:06:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:56.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:56 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16d30c9b4f93a8e65a53026b670b8e8dc9da620181fc63b3b4a7ec447fd03a05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16d30c9b4f93a8e65a53026b670b8e8dc9da620181fc63b3b4a7ec447fd03a05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16d30c9b4f93a8e65a53026b670b8e8dc9da620181fc63b3b4a7ec447fd03a05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16d30c9b4f93a8e65a53026b670b8e8dc9da620181fc63b3b4a7ec447fd03a05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16d30c9b4f93a8e65a53026b670b8e8dc9da620181fc63b3b4a7ec447fd03a05/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:06:56 compute-0 podman[214714]: 2025-10-09 10:06:56.416315061 +0000 UTC m=+0.087909198 container init d575a9a7850d944545f1d605596ba220117cde4591ab3d5cc3a2939ad74fb090 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cori, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  9 10:06:56 compute-0 podman[214714]: 2025-10-09 10:06:56.422197593 +0000 UTC m=+0.093791709 container start d575a9a7850d944545f1d605596ba220117cde4591ab3d5cc3a2939ad74fb090 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cori, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:06:56 compute-0 podman[214714]: 2025-10-09 10:06:56.423369312 +0000 UTC m=+0.094963429 container attach d575a9a7850d944545f1d605596ba220117cde4591ab3d5cc3a2939ad74fb090 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 10:06:56 compute-0 podman[214714]: 2025-10-09 10:06:56.346549568 +0000 UTC m=+0.018143694 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:06:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:56 compute-0 keen_cori[214727]: --> passed data devices: 0 physical, 1 LVM
Oct  9 10:06:56 compute-0 keen_cori[214727]: --> All data devices are unavailable
Oct  9 10:06:56 compute-0 systemd[1]: libpod-d575a9a7850d944545f1d605596ba220117cde4591ab3d5cc3a2939ad74fb090.scope: Deactivated successfully.
Oct  9 10:06:56 compute-0 podman[214714]: 2025-10-09 10:06:56.686695342 +0000 UTC m=+0.358289458 container died d575a9a7850d944545f1d605596ba220117cde4591ab3d5cc3a2939ad74fb090 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:06:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-16d30c9b4f93a8e65a53026b670b8e8dc9da620181fc63b3b4a7ec447fd03a05-merged.mount: Deactivated successfully.
Oct  9 10:06:56 compute-0 podman[214714]: 2025-10-09 10:06:56.707786854 +0000 UTC m=+0.379380970 container remove d575a9a7850d944545f1d605596ba220117cde4591ab3d5cc3a2939ad74fb090 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cori, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  9 10:06:56 compute-0 systemd[1]: libpod-conmon-d575a9a7850d944545f1d605596ba220117cde4591ab3d5cc3a2939ad74fb090.scope: Deactivated successfully.
Oct  9 10:06:56 compute-0 nova_compute[187439]: 2025-10-09 10:06:56.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:06:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:57.108Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:57.116Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:57.117Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:57.117Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:57 compute-0 podman[214833]: 2025-10-09 10:06:57.116544144 +0000 UTC m=+0.029522178 container create 547bdaee41aa6cae9ff21852974741ef579b824adf78ac36782b69981203b22a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_hodgkin, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  9 10:06:57 compute-0 systemd[1]: Started libpod-conmon-547bdaee41aa6cae9ff21852974741ef579b824adf78ac36782b69981203b22a.scope.
Oct  9 10:06:57 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:06:57 compute-0 podman[214833]: 2025-10-09 10:06:57.176502575 +0000 UTC m=+0.089480619 container init 547bdaee41aa6cae9ff21852974741ef579b824adf78ac36782b69981203b22a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  9 10:06:57 compute-0 podman[214833]: 2025-10-09 10:06:57.180845605 +0000 UTC m=+0.093823629 container start 547bdaee41aa6cae9ff21852974741ef579b824adf78ac36782b69981203b22a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_hodgkin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  9 10:06:57 compute-0 podman[214833]: 2025-10-09 10:06:57.181941691 +0000 UTC m=+0.094919735 container attach 547bdaee41aa6cae9ff21852974741ef579b824adf78ac36782b69981203b22a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_hodgkin, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:06:57 compute-0 kind_hodgkin[214846]: 167 167
Oct  9 10:06:57 compute-0 systemd[1]: libpod-547bdaee41aa6cae9ff21852974741ef579b824adf78ac36782b69981203b22a.scope: Deactivated successfully.
Oct  9 10:06:57 compute-0 conmon[214846]: conmon 547bdaee41aa6cae9ff2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-547bdaee41aa6cae9ff21852974741ef579b824adf78ac36782b69981203b22a.scope/container/memory.events
Oct  9 10:06:57 compute-0 podman[214833]: 2025-10-09 10:06:57.185281269 +0000 UTC m=+0.098259293 container died 547bdaee41aa6cae9ff21852974741ef579b824adf78ac36782b69981203b22a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_hodgkin, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:06:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f15390030de944b3219d5c38cf8e2f45423d4733327cea24593114e7ae41919-merged.mount: Deactivated successfully.
Oct  9 10:06:57 compute-0 podman[214833]: 2025-10-09 10:06:57.103940237 +0000 UTC m=+0.016918282 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:06:57 compute-0 podman[214833]: 2025-10-09 10:06:57.20563039 +0000 UTC m=+0.118608414 container remove 547bdaee41aa6cae9ff21852974741ef579b824adf78ac36782b69981203b22a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  9 10:06:57 compute-0 systemd[1]: libpod-conmon-547bdaee41aa6cae9ff21852974741ef579b824adf78ac36782b69981203b22a.scope: Deactivated successfully.
Oct  9 10:06:57 compute-0 podman[214866]: 2025-10-09 10:06:57.335746302 +0000 UTC m=+0.030015249 container create 14ec62d877b3cd5d220d6da7d03b0f1e30846a5385d0ba91b681bd7a06437a38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  9 10:06:57 compute-0 systemd[1]: Started libpod-conmon-14ec62d877b3cd5d220d6da7d03b0f1e30846a5385d0ba91b681bd7a06437a38.scope.
Oct  9 10:06:57 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c522fd24ed3c5a1f1c341d51eb2a5f0d4ba2b8afd7d63760bd5a460de28f37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c522fd24ed3c5a1f1c341d51eb2a5f0d4ba2b8afd7d63760bd5a460de28f37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c522fd24ed3c5a1f1c341d51eb2a5f0d4ba2b8afd7d63760bd5a460de28f37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c522fd24ed3c5a1f1c341d51eb2a5f0d4ba2b8afd7d63760bd5a460de28f37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:06:57 compute-0 podman[214866]: 2025-10-09 10:06:57.394485052 +0000 UTC m=+0.088753999 container init 14ec62d877b3cd5d220d6da7d03b0f1e30846a5385d0ba91b681bd7a06437a38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_merkle, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:06:57 compute-0 podman[214866]: 2025-10-09 10:06:57.399090216 +0000 UTC m=+0.093359163 container start 14ec62d877b3cd5d220d6da7d03b0f1e30846a5385d0ba91b681bd7a06437a38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_merkle, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct  9 10:06:57 compute-0 podman[214866]: 2025-10-09 10:06:57.40060901 +0000 UTC m=+0.094877957 container attach 14ec62d877b3cd5d220d6da7d03b0f1e30846a5385d0ba91b681bd7a06437a38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  9 10:06:57 compute-0 podman[214866]: 2025-10-09 10:06:57.322735118 +0000 UTC m=+0.017004075 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]: {
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:    "1": [
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:        {
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:            "devices": [
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:                "/dev/loop3"
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:            ],
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:            "lv_name": "ceph_lv0",
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:            "lv_size": "21470642176",
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:            "name": "ceph_lv0",
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:            "tags": {
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:                "ceph.cephx_lockbox_secret": "",
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:                "ceph.cluster_name": "ceph",
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:                "ceph.crush_device_class": "",
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:                "ceph.encrypted": "0",
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:                "ceph.osd_id": "1",
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:                "ceph.type": "block",
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:                "ceph.vdo": "0",
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:                "ceph.with_tpm": "0"
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:            },
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:            "type": "block",
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:            "vg_name": "ceph_vg0"
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:        }
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]:    ]
Oct  9 10:06:57 compute-0 peaceful_merkle[214879]: }
Oct  9 10:06:57 compute-0 podman[214866]: 2025-10-09 10:06:57.633000939 +0000 UTC m=+0.327269886 container died 14ec62d877b3cd5d220d6da7d03b0f1e30846a5385d0ba91b681bd7a06437a38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  9 10:06:57 compute-0 systemd[1]: libpod-14ec62d877b3cd5d220d6da7d03b0f1e30846a5385d0ba91b681bd7a06437a38.scope: Deactivated successfully.
Oct  9 10:06:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-66c522fd24ed3c5a1f1c341d51eb2a5f0d4ba2b8afd7d63760bd5a460de28f37-merged.mount: Deactivated successfully.
Oct  9 10:06:57 compute-0 podman[214866]: 2025-10-09 10:06:57.655088077 +0000 UTC m=+0.349357024 container remove 14ec62d877b3cd5d220d6da7d03b0f1e30846a5385d0ba91b681bd7a06437a38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_merkle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  9 10:06:57 compute-0 systemd[1]: libpod-conmon-14ec62d877b3cd5d220d6da7d03b0f1e30846a5385d0ba91b681bd7a06437a38.scope: Deactivated successfully.
Oct  9 10:06:57 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1020: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 10:06:57 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:57 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:57 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:57.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:58 compute-0 podman[214978]: 2025-10-09 10:06:58.079876689 +0000 UTC m=+0.030144964 container create 8e728b026cc0d1b5f8013d3b973806dff408d219885b162b1caebcb94494714f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_mclaren, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:06:58 compute-0 systemd[1]: Started libpod-conmon-8e728b026cc0d1b5f8013d3b973806dff408d219885b162b1caebcb94494714f.scope.
Oct  9 10:06:58 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:06:58 compute-0 podman[214978]: 2025-10-09 10:06:58.137248331 +0000 UTC m=+0.087516596 container init 8e728b026cc0d1b5f8013d3b973806dff408d219885b162b1caebcb94494714f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  9 10:06:58 compute-0 podman[214978]: 2025-10-09 10:06:58.141460794 +0000 UTC m=+0.091729069 container start 8e728b026cc0d1b5f8013d3b973806dff408d219885b162b1caebcb94494714f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_mclaren, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 10:06:58 compute-0 podman[214978]: 2025-10-09 10:06:58.142624278 +0000 UTC m=+0.092892552 container attach 8e728b026cc0d1b5f8013d3b973806dff408d219885b162b1caebcb94494714f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:06:58 compute-0 modest_mclaren[214992]: 167 167
Oct  9 10:06:58 compute-0 systemd[1]: libpod-8e728b026cc0d1b5f8013d3b973806dff408d219885b162b1caebcb94494714f.scope: Deactivated successfully.
Oct  9 10:06:58 compute-0 conmon[214992]: conmon 8e728b026cc0d1b5f801 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e728b026cc0d1b5f8013d3b973806dff408d219885b162b1caebcb94494714f.scope/container/memory.events
Oct  9 10:06:58 compute-0 podman[214978]: 2025-10-09 10:06:58.145247585 +0000 UTC m=+0.095515859 container died 8e728b026cc0d1b5f8013d3b973806dff408d219885b162b1caebcb94494714f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 10:06:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-001b8d11a79bb53fc25f348e6d8c8f19723b8e1bb0759fe6c119ed412d65af35-merged.mount: Deactivated successfully.
Oct  9 10:06:58 compute-0 podman[214978]: 2025-10-09 10:06:58.066992043 +0000 UTC m=+0.017260338 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:06:58 compute-0 podman[214978]: 2025-10-09 10:06:58.163648142 +0000 UTC m=+0.113916417 container remove 8e728b026cc0d1b5f8013d3b973806dff408d219885b162b1caebcb94494714f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_mclaren, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:06:58 compute-0 systemd[1]: libpod-conmon-8e728b026cc0d1b5f8013d3b973806dff408d219885b162b1caebcb94494714f.scope: Deactivated successfully.
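The modest_mclaren entries above show the complete lifecycle of one of cephadm's throwaway check containers: create, init, start, attach, died, and remove inside roughly 100 ms, each step mirrored by a systemd scope activating and deactivating. A sketch, not cephadm's own tooling, that watches the same sequence via `podman events` (JSON field names as emitted by recent podman releases, so treat them as an assumption to verify):

    import json
    import subprocess

    # Stream container lifecycle events, one JSON object per line, and print
    # the status/ID/name triple matching the podman log entries above.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--filter", "type=container"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Status"), ev.get("ID", "")[:12], ev.get("Name"))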
Oct  9 10:06:58 compute-0 nova_compute[187439]: 2025-10-09 10:06:58.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:06:58 compute-0 podman[215014]: 2025-10-09 10:06:58.283412495 +0000 UTC m=+0.027516127 container create 2e2630cf27bc99836317334f201aa35cfbff86fe979dc47d91e3a87d33ee9577 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ganguly, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  9 10:06:58 compute-0 systemd[1]: Started libpod-conmon-2e2630cf27bc99836317334f201aa35cfbff86fe979dc47d91e3a87d33ee9577.scope.
Oct  9 10:06:58 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:06:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e20f3ef07c3f2d2b291ebb469d7671a28aa80b14785685b9f7fff170809b3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:06:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e20f3ef07c3f2d2b291ebb469d7671a28aa80b14785685b9f7fff170809b3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:06:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e20f3ef07c3f2d2b291ebb469d7671a28aa80b14785685b9f7fff170809b3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:06:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35e20f3ef07c3f2d2b291ebb469d7671a28aa80b14785685b9f7fff170809b3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:06:58 compute-0 podman[215014]: 2025-10-09 10:06:58.335032922 +0000 UTC m=+0.079136554 container init 2e2630cf27bc99836317334f201aa35cfbff86fe979dc47d91e3a87d33ee9577 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ganguly, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  9 10:06:58 compute-0 podman[215014]: 2025-10-09 10:06:58.339858982 +0000 UTC m=+0.083962615 container start 2e2630cf27bc99836317334f201aa35cfbff86fe979dc47d91e3a87d33ee9577 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ganguly, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct  9 10:06:58 compute-0 podman[215014]: 2025-10-09 10:06:58.341021805 +0000 UTC m=+0.085125437 container attach 2e2630cf27bc99836317334f201aa35cfbff86fe979dc47d91e3a87d33ee9577 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  9 10:06:58 compute-0 podman[215014]: 2025-10-09 10:06:58.272354653 +0000 UTC m=+0.016458295 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:06:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:58.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:58 compute-0 nice_ganguly[215027]: {}
Oct  9 10:06:58 compute-0 lvm[215111]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:06:58 compute-0 lvm[215111]: VG ceph_vg0 finished
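The lvm event activation above fires once /dev/loop3 comes online and declares ceph_vg0 complete. One way to confirm the same state from userspace, with the report layout following lvm2's `--reportformat json` convention (an assumption worth verifying against your lvm2 version):

    import json
    import subprocess

    # Ask lvm for the VG and its PV count; JSON layout assumed to be
    # {"report": [{"vg": [...]}]} with string-valued fields.
    out = subprocess.check_output(
        ["vgs", "--reportformat", "json", "-o", "vg_name,pv_count", "ceph_vg0"],
        text=True,
    )
    for vg in json.loads(out)["report"][0]["vg"]:
        print(vg["vg_name"], "PVs:", vg["pv_count"])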
Oct  9 10:06:58 compute-0 systemd[1]: libpod-2e2630cf27bc99836317334f201aa35cfbff86fe979dc47d91e3a87d33ee9577.scope: Deactivated successfully.
Oct  9 10:06:58 compute-0 podman[215014]: 2025-10-09 10:06:58.817692919 +0000 UTC m=+0.561796551 container died 2e2630cf27bc99836317334f201aa35cfbff86fe979dc47d91e3a87d33ee9577 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ganguly, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:06:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-35e20f3ef07c3f2d2b291ebb469d7671a28aa80b14785685b9f7fff170809b3d-merged.mount: Deactivated successfully.
Oct  9 10:06:58 compute-0 podman[215102]: 2025-10-09 10:06:58.840806753 +0000 UTC m=+0.052607320 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  9 10:06:58 compute-0 podman[215014]: 2025-10-09 10:06:58.847684481 +0000 UTC m=+0.591788113 container remove 2e2630cf27bc99836317334f201aa35cfbff86fe979dc47d91e3a87d33ee9577 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 10:06:58 compute-0 systemd[1]: libpod-conmon-2e2630cf27bc99836317334f201aa35cfbff86fe979dc47d91e3a87d33ee9577.scope: Deactivated successfully.
Oct  9 10:06:58 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:06:58 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:58 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:06:58 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:58.940Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:58.948Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:58.948Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:06:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:06:58.948Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
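All three ceph-dashboard webhook receivers point at *.shiftstack hostnames that the resolver at 192.168.122.80 cannot answer for, so every notification exhausts its retries with "no such host". A quick stdlib check against the exact hostnames from the errors above:

    import socket

    # Expect socket.gaierror for each host, matching the dispatcher failures.
    for host in ("np0005478302.shiftstack",
                 "np0005478303.shiftstack",
                 "np0005478304.shiftstack"):
        try:
            print(host, "->", socket.gethostbyname(host))
        except socket.gaierror as exc:
            print(host, "unresolvable:", exc)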
Oct  9 10:06:59 compute-0 nova_compute[187439]: 2025-10-09 10:06:59.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:06:59 compute-0 nova_compute[187439]: 2025-10-09 10:06:59.264 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:06:59 compute-0 nova_compute[187439]: 2025-10-09 10:06:59.265 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:06:59 compute-0 nova_compute[187439]: 2025-10-09 10:06:59.265 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:06:59 compute-0 nova_compute[187439]: 2025-10-09 10:06:59.265 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:06:59 compute-0 nova_compute[187439]: 2025-10-09 10:06:59.265 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:06:59 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:59 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
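The pg_autoscaler lines above all follow one formula: the raw PG target is capacity ratio × bias × cluster PG budget, where the budget works out to 300 here (consistent with mon_target_pg_per_osd=100 across 3 OSDs, an inference from the numbers rather than something the log states), and the result is quantized to a power of two no lower than the pool's floor. A sketch reproducing three of the reported values:

    # Raw target = ratio * bias * 300; quantize up to a power of two,
    # floored at the pool's minimum (taken from the "current" values above).
    def quantize(target: float, floor: int) -> int:
        pgs = floor
        while pgs < target:
            pgs *= 2
        return pgs

    for pool, ratio, bias, floor in [
        (".mgr", 7.185749983720779e-06, 1.0, 1),
        ("images", 0.000665858301588852, 1.0, 32),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 16),
    ]:
        raw = ratio * bias * 300
        print(pool, raw, quantize(raw, floor))
    # .mgr -> 0.00215... -> 1; images -> 0.19975... -> 32; meta -> 16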
Oct  9 10:06:59 compute-0 nova_compute[187439]: 2025-10-09 10:06:59.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:06:59 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2836267413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:06:59 compute-0 nova_compute[187439]: 2025-10-09 10:06:59.625 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
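Each resource-audit pass shells out to `ceph df --format=json --id openstack`, which is why the mon audit channel logs a matching `df` dispatch from client.openstack; the call returns in about 0.35 s both times it runs here. A sketch of the same call and the cluster-wide fields of interest (key names per the usual `ceph df` JSON layout, so verify against your release):

    import json
    import subprocess

    # Same command the resource tracker runs above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        text=True,
    )
    stats = json.loads(out)["stats"]
    print("total bytes:", stats["total_bytes"],
          "avail bytes:", stats["total_avail_bytes"])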
Oct  9 10:06:59 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1021: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 10:06:59 compute-0 nova_compute[187439]: 2025-10-09 10:06:59.821 2 WARNING nova.virt.libvirt.driver [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:06:59 compute-0 nova_compute[187439]: 2025-10-09 10:06:59.823 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4590MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:06:59 compute-0 nova_compute[187439]: 2025-10-09 10:06:59.823 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:06:59 compute-0 nova_compute[187439]: 2025-10-09 10:06:59.823 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:06:59 compute-0 nova_compute[187439]: 2025-10-09 10:06:59.871 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:06:59 compute-0 nova_compute[187439]: 2025-10-09 10:06:59.871 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:06:59 compute-0 nova_compute[187439]: 2025-10-09 10:06:59.886 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:06:59 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:06:59 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:59 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:59.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:59 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:59 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:06:59 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:00 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:00 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:00 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:07:00 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1231242342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:07:00 compute-0 nova_compute[187439]: 2025-10-09 10:07:00.236 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:07:00 compute-0 nova_compute[187439]: 2025-10-09 10:07:00.240 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:07:00 compute-0 nova_compute[187439]: 2025-10-09 10:07:00.250 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:07:00 compute-0 nova_compute[187439]: 2025-10-09 10:07:00.251 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:07:00 compute-0 nova_compute[187439]: 2025-10-09 10:07:00.252 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
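The inventory reported to placement at 10:07:00 encodes capacity as total, reserved, and allocation ratio per resource class; the schedulable amount is (total - reserved) * allocation_ratio, i.e. 16 vCPUs, 7168 MB of RAM, and about 52.2 GB of disk for this node. Worked out directly from the values in the log:

    # Schedulable capacity implied by the inventory line above.
    inventory = {
        "VCPU": {"total": 4, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 16.0, MEMORY_MB 7168.0, DISK_GB ~52.2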
Oct  9 10:07:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:00.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:01 compute-0 nova_compute[187439]: 2025-10-09 10:07:01.251 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:01 compute-0 nova_compute[187439]: 2025-10-09 10:07:01.267 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:01.506325) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421506386, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2125, "num_deletes": 259, "total_data_size": 3850155, "memory_usage": 3897640, "flush_reason": "Manual Compaction"}
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421514161, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3653044, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26793, "largest_seqno": 28917, "table_properties": {"data_size": 3642593, "index_size": 6305, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 26278, "raw_average_key_size": 21, "raw_value_size": 3620093, "raw_average_value_size": 3006, "num_data_blocks": 273, "num_entries": 1204, "num_filter_entries": 1204, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004276, "oldest_key_time": 1760004276, "file_creation_time": 1760004421, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 7853 microseconds, and 6140 cpu microseconds.
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:01.514190) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3653044 bytes OK
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:01.514202) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:01.515008) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:01.515018) EVENT_LOG_v1 {"time_micros": 1760004421515015, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:01.515030) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3840193, prev total WAL file size 3840193, number of live WAL files 2.
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:01.515672) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353034' seq:72057594037927935, type:22 .. '6C6F676D00373539' seq:0, type:0; will stop at (end)
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3567KB)], [59(13MB)]
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421515707, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 17779830, "oldest_snapshot_seqno": -1}
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 6466 keys, 17623653 bytes, temperature: kUnknown
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421549325, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 17623653, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17576933, "index_size": 29458, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 164614, "raw_average_key_size": 25, "raw_value_size": 17456893, "raw_average_value_size": 2699, "num_data_blocks": 1206, "num_entries": 6466, "num_filter_entries": 6466, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002419, "oldest_key_time": 0, "file_creation_time": 1760004421, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:01.549573) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 17623653 bytes
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:01.561499) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 526.6 rd, 522.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 13.5 +0.0 blob) out(16.8 +0.0 blob), read-write-amplify(9.7) write-amplify(4.8) OK, records in: 7002, records dropped: 536 output_compression: NoCompression
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:01.561517) EVENT_LOG_v1 {"time_micros": 1760004421561509, "job": 32, "event": "compaction_finished", "compaction_time_micros": 33764, "compaction_time_cpu_micros": 25131, "output_level": 6, "num_output_files": 1, "total_output_size": 17623653, "num_input_records": 7002, "num_output_records": 6466, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421562604, "job": 32, "event": "table_file_deletion", "file_number": 61}
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421564666, "job": 32, "event": "table_file_deletion", "file_number": 59}
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:01.515607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:01.564773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:01.564776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:01.564779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:01.564780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:01 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:01.564781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
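The compaction summary at 10:07:01 reports write-amplify 4.8 and read-write-amplify 9.7, and both figures follow directly from the sizes it prints: 3.5 MB flushed from L0 plus 13.5 MB already in L6, rewritten as one 16.8 MB L6 file. The arithmetic, relative to the new data (the L0 input):

    # Amplification from the compaction summary above:
    # in(3.5, 13.5) out(16.8) MB.
    l0_in, l6_in, out = 3.5, 13.5, 16.8
    print(round(out / l0_in, 1))                      # write-amplify: 4.8
    print(round((l0_in + l6_in + out) / l0_in, 1))    # read-write-amplify: 9.7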
Oct  9 10:07:01 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1022: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 10:07:01 compute-0 nova_compute[187439]: 2025-10-09 10:07:01.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:01 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:01 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:01 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:01.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:02 compute-0 nova_compute[187439]: 2025-10-09 10:07:02.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:02 compute-0 nova_compute[187439]: 2025-10-09 10:07:02.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:07:02] "GET /metrics HTTP/1.1" 200 48530 "" "Prometheus/2.51.0"
Oct  9 10:07:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:07:02] "GET /metrics HTTP/1.1" 200 48530 "" "Prometheus/2.51.0"
Oct  9 10:07:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:02.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:03.580Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:03.593Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:03.594Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:03.594Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:03 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1023: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 10:07:03 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:03 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:03 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:03.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:04 compute-0 nova_compute[187439]: 2025-10-09 10:07:04.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:04 compute-0 nova_compute[187439]: 2025-10-09 10:07:04.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:04.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:07:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:07:04 compute-0 nova_compute[187439]: 2025-10-09 10:07:04.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:07:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:04 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:04 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:04 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:05 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:05 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
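This four-line NFS-Ganesha sequence recurs every five seconds for the rest of the section: the server starts a 90-second grace period, reloads (zero) client records from the recovery backend, finds nothing to reclaim (clid count(0)), and rados_cluster_grace_enforcing returns -45, after which the cycle restarts instead of grace being lifted. A small sketch that pulls these events out of a saved copy of the journal to make the cadence visible; the log path is an assumption:

    import re

    event = re.compile(
        r"^(?P<ts>\w+ +\d+ \d\d:\d\d:\d\d) .*ganesha\.nfsd-2\[main\] "
        r"(?P<fn>\w+) :(?P<tag>[A-Z ]+):EVENT :(?P<msg>.*)$"
    )

    with open("/var/log/messages") as fh:  # assumed location of this journal
        for line in fh:
            if (m := event.search(line)):
                print(m["ts"], m["fn"], "-", m["msg"])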
Oct  9 10:07:05 compute-0 nova_compute[187439]: 2025-10-09 10:07:05.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:07:05 compute-0 nova_compute[187439]: 2025-10-09 10:07:05.246 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  9 10:07:05 compute-0 nova_compute[187439]: 2025-10-09 10:07:05.246 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  9 10:07:05 compute-0 nova_compute[187439]: 2025-10-09 10:07:05.257 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
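nova-compute is idle here: its periodic tasks (_instance_usage_audit, _poll_volume_usage, _heal_instance_info_cache, and _reclaim_queued_deletes just below) all fire on schedule, and the info-cache healer exits early because the host has no instances. Purely as an illustration of the cadence, not nova's actual implementation, a periodic-task loop of this shape looks like:

    import time

    def run_periodic(tasks):
        # tasks: {name: (interval_seconds, callable)}
        last = {name: 0.0 for name in tasks}
        while True:
            now = time.monotonic()
            for name, (interval, fn) in tasks.items():
                if now - last[name] >= interval:
                    last[name] = now
                    fn()  # one tick of this periodic task
            time.sleep(1)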
Oct  9 10:07:05 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1024: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 2 op/s
Oct  9 10:07:05 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:05 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:05 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:05.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:06 compute-0 nova_compute[187439]: 2025-10-09 10:07:06.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:07:06 compute-0 nova_compute[187439]: 2025-10-09 10:07:06.246 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  9 10:07:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:06.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
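The monitor's memory autotuner reprints this line every five seconds with identical numbers. If the three figures are read as a split of the cache budget (an assumption; the log does not define their relationship), they fit just under the target: 348127232 + 348127232 + 318767104 = 1015021568 bytes against a cache_size of 1020054731 bytes, i.e. roughly 0.95 GiB fully allocated with about 5 MB of headroom.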
Oct  9 10:07:06 compute-0 podman[215210]: 2025-10-09 10:07:06.59569395 +0000 UTC m=+0.038730975 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
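This podman event records a scheduled healthcheck run for ovn_metadata_agent: the configured test ('/openstack/healthcheck', mounted from /var/lib/openstack/healthchecks/ovn_metadata_agent) exited zero, so the container is reported health_status=healthy with a failing streak of 0. The same check can be triggered by hand; a sketch via subprocess, assuming the podman CLI is on this host:

    import subprocess

    # Exit status 0 here corresponds to health_status=healthy in the event log.
    subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"], check=True)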
Oct  9 10:07:06 compute-0 nova_compute[187439]: 2025-10-09 10:07:06.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:07:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:07.109Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:07.117Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:07.117Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:07.117Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:07 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1025: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:07:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:08.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:08.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:08.941Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:08.982Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:08.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:08.983Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:09 compute-0 nova_compute[187439]: 2025-10-09 10:07:09.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:07:09 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1026: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:07:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:09 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:09 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:09 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:10 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:10.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:07:10.120 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 10:07:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:07:10.122 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 10:07:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:07:10.122 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 10:07:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:10.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:11 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1027: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1 op/s
Oct  9 10:07:11 compute-0 nova_compute[187439]: 2025-10-09 10:07:11.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:07:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:12.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:07:12] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Oct  9 10:07:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:07:12] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
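Prometheus 2.51.0 scrapes the mgr's exporter every ten seconds in this section (10:07:12, :22, :32), each time receiving a 200 and roughly 48.5 kB of metrics. The same payload can be fetched directly; the port is an assumption (9283 is the ceph-mgr prometheus module default), since the log records only the path and the response size:

    import urllib.request

    # Fetch the endpoint Prometheus is scraping in the access-log lines above.
    with urllib.request.urlopen("http://192.168.122.100:9283/metrics", timeout=5) as resp:
        print(resp.status, len(resp.read()))  # expect 200 and ~48534 bytes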
Oct  9 10:07:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:12.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:12 compute-0 podman[215233]: 2025-10-09 10:07:12.600695695 +0000 UTC m=+0.045215411 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:07:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:13.581Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:13.587Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:13.587Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:13.588Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:13 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1028: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:07:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:14.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:14.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:14 compute-0 nova_compute[187439]: 2025-10-09 10:07:14.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:07:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:14 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:14 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:14 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:15 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:15 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1029: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:07:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:16.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:16.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:16 compute-0 nova_compute[187439]: 2025-10-09 10:07:16.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:07:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:17.110Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:17.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:17.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:17.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:17 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1030: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:07:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:18.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:18.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:18.941Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:18.949Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:18.950Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:18.950Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:07:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:07:19 compute-0 nova_compute[187439]: 2025-10-09 10:07:19.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:07:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:07:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:07:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:07:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:07:19 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1031: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:07:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:07:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:07:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:19 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:20 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:20 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:20.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:20.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:21 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1032: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:07:21 compute-0 nova_compute[187439]: 2025-10-09 10:07:21.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:07:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:22.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:07:22] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Oct  9 10:07:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:07:22] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Oct  9 10:07:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:22.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:22 compute-0 podman[215286]: 2025-10-09 10:07:22.613870404 +0000 UTC m=+0.056322806 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:07:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:23.581Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:23.589Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:23.589Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:23.589Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:23 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1033: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:07:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:24.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:24.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:24 compute-0 nova_compute[187439]: 2025-10-09 10:07:24.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:07:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:24 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:24 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:24 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:25 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:24 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:25 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1034: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:07:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:26.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:26.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:26 compute-0 nova_compute[187439]: 2025-10-09 10:07:26.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:07:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:27.111Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:27.122Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:27.122Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:27.122Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:27 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1035: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:07:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:07:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:28.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:07:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:28.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:28.942Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:28.949Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:28.949Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:28.950Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:29 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:28 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:29 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:28 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:29 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:28 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:29 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:29 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:29 compute-0 nova_compute[187439]: 2025-10-09 10:07:29.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:07:29 compute-0 podman[215314]: 2025-10-09 10:07:29.610770988 +0000 UTC m=+0.046874418 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  9 10:07:29 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1036: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:07:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:30.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:30.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:31 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1037: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:07:31 compute-0 nova_compute[187439]: 2025-10-09 10:07:31.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:07:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:32.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:07:32] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Oct  9 10:07:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:07:32] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Oct  9 10:07:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:32.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:33.582Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:33.590Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:33.590Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:33.591Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:33 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1038: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:07:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:34.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:34.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:07:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:07:34 compute-0 nova_compute[187439]: 2025-10-09 10:07:34.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:07:35 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1039: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:07:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:36.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=cleanup t=2025-10-09T10:07:36.390122312Z level=info msg="Completed cleanup jobs" duration=2.615822ms
Oct  9 10:07:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:36.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=plugins.update.checker t=2025-10-09T10:07:36.482558766Z level=info msg="Update check succeeded" duration=34.304938ms
Oct  9 10:07:36 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=grafana.update.checker t=2025-10-09T10:07:36.489712885Z level=info msg="Update check succeeded" duration=44.875449ms
Oct  9 10:07:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:36 compute-0 nova_compute[187439]: 2025-10-09 10:07:36.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:07:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:37.112Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:37.121Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:37.121Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:37.122Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:37 compute-0 podman[215364]: 2025-10-09 10:07:37.595900442 +0000 UTC m=+0.037930665 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:07:37 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1040: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:07:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:38.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:38.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:38.944Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:38.962Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:38.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:38.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:39 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:38 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:39 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:38 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:39 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:38 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:39 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:38 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:39 compute-0 nova_compute[187439]: 2025-10-09 10:07:39.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:07:39 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1041: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:07:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:40.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:07:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:40.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:07:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:41 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1042: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:07:41 compute-0 nova_compute[187439]: 2025-10-09 10:07:41.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:07:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:07:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:42.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:07:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:07:42] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Oct  9 10:07:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:07:42] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Oct  9 10:07:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:42.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:43.582Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:43 compute-0 podman[215386]: 2025-10-09 10:07:43.598816244 +0000 UTC m=+0.042158858 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:07:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:43.600Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:43.600Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:43.601Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:43 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1043: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:07:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:44.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:44.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:44 compute-0 nova_compute[187439]: 2025-10-09 10:07:44.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:07:45 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1044: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:07:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:46.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:46.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:46 compute-0 nova_compute[187439]: 2025-10-09 10:07:46.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:07:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:46 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:46 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:46 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:47.113Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:47.121Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:47.121Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:47.121Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:47 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1045: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:07:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:07:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:48.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:07:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:48.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:48.945Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:48 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct  9 10:07:48 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:48.951181) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 10:07:48 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct  9 10:07:48 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004468951225, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 614, "num_deletes": 251, "total_data_size": 848004, "memory_usage": 859320, "flush_reason": "Manual Compaction"}
Oct  9 10:07:48 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct  9 10:07:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:48.952Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:48.953Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:48.953Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:48 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004468954329, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 836219, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28918, "largest_seqno": 29531, "table_properties": {"data_size": 832917, "index_size": 1210, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7395, "raw_average_key_size": 19, "raw_value_size": 826465, "raw_average_value_size": 2124, "num_data_blocks": 55, "num_entries": 389, "num_filter_entries": 389, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004422, "oldest_key_time": 1760004422, "file_creation_time": 1760004468, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:07:48 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 3166 microseconds, and 2346 cpu microseconds.
Oct  9 10:07:48 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:07:48 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:48.954364) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 836219 bytes OK
Oct  9 10:07:48 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:48.954375) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct  9 10:07:48 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:48.954992) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct  9 10:07:48 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:48.955002) EVENT_LOG_v1 {"time_micros": 1760004468954999, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 10:07:48 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:48.955014) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 10:07:48 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 844763, prev total WAL file size 844763, number of live WAL files 2.
Oct  9 10:07:48 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:07:48 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:48.955355) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct  9 10:07:48 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 10:07:48 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(816KB)], [62(16MB)]
Oct  9 10:07:48 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004468955391, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 18459872, "oldest_snapshot_seqno": -1}
Oct  9 10:07:48 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6344 keys, 16354731 bytes, temperature: kUnknown
Oct  9 10:07:48 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004468994633, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 16354731, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16309810, "index_size": 27979, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 162731, "raw_average_key_size": 25, "raw_value_size": 16192891, "raw_average_value_size": 2552, "num_data_blocks": 1141, "num_entries": 6344, "num_filter_entries": 6344, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002419, "oldest_key_time": 0, "file_creation_time": 1760004468, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:07:48 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:07:49 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:48.994749) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 16354731 bytes
Oct  9 10:07:49 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:49.001333) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 470.3 rd, 416.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 16.8 +0.0 blob) out(15.6 +0.0 blob), read-write-amplify(41.6) write-amplify(19.6) OK, records in: 6855, records dropped: 511 output_compression: NoCompression
Oct  9 10:07:49 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:49.001345) EVENT_LOG_v1 {"time_micros": 1760004469001340, "job": 34, "event": "compaction_finished", "compaction_time_micros": 39254, "compaction_time_cpu_micros": 24431, "output_level": 6, "num_output_files": 1, "total_output_size": 16354731, "num_input_records": 6855, "num_output_records": 6344, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 10:07:49 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:07:49 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004469001497, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct  9 10:07:49 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:07:49 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004469003418, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct  9 10:07:49 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:48.955291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:49 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:49.003470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:49 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:49.003474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:49 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:49.003476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:49 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:49.003477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:49 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:07:49.003478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_10:07:49
Oct  9 10:07:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 10:07:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 10:07:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', '.rgw.root', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'backups', 'images', '.nfs', 'default.rgw.log']
Oct  9 10:07:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 10:07:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:07:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:07:49 compute-0 nova_compute[187439]: 2025-10-09 10:07:49.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:07:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:07:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:07:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:07:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:07:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 10:07:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:07:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:07:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:07:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:07:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:07:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:07:49 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1046: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:07:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 10:07:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:07:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:07:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:07:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:07:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:50.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:50.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:51 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1047: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:07:51 compute-0 nova_compute[187439]: 2025-10-09 10:07:51.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:07:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:51 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:51 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:51 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:51 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:52.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:07:52] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Oct  9 10:07:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:07:52] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Oct  9 10:07:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:52.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:53.584Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:53.593Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:53.593Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:53.593Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:53 compute-0 podman[215413]: 2025-10-09 10:07:53.611981343 +0000 UTC m=+0.055322540 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct  9 10:07:53 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1048: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:07:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:54.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:54.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:54 compute-0 nova_compute[187439]: 2025-10-09 10:07:54.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:07:55 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1049: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:07:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:55 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:55 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:55 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:56 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:07:56 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:07:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:56.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:07:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:56.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:56 compute-0 nova_compute[187439]: 2025-10-09 10:07:56.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:07:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:57.114Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:57.124Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:57.124Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:57.125Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:57 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1050: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:07:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:58.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:07:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:58.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:58.945Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:58.957Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:58.958Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:07:58.958Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:07:59 compute-0 nova_compute[187439]: 2025-10-09 10:07:59.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  9 10:07:59 compute-0 nova_compute[187439]: 2025-10-09 10:07:59.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:07:59 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:07:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 10:07:59 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:07:59 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1051: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct  9 10:07:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 10:07:59 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:07:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 10:07:59 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:07:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 10:07:59 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 10:07:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 10:07:59 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:07:59 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:07:59 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:07:59 compute-0 podman[215570]: 2025-10-09 10:07:59.844834023 +0000 UTC m=+0.049421503 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:08:00 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:08:00 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:08:00 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:08:00 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:08:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:00.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:00 compute-0 podman[215645]: 2025-10-09 10:08:00.18686442 +0000 UTC m=+0.042927618 container create a6d7d3b9e92fe81bf8b7a53d655f1378a1152520f9550dcd8ec311ada6f7b204 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  9 10:08:00 compute-0 systemd[1]: Started libpod-conmon-a6d7d3b9e92fe81bf8b7a53d655f1378a1152520f9550dcd8ec311ada6f7b204.scope.
Oct  9 10:08:00 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:08:00 compute-0 nova_compute[187439]: 2025-10-09 10:08:00.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:00 compute-0 nova_compute[187439]: 2025-10-09 10:08:00.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:00 compute-0 podman[215645]: 2025-10-09 10:08:00.257701754 +0000 UTC m=+0.113764971 container init a6d7d3b9e92fe81bf8b7a53d655f1378a1152520f9550dcd8ec311ada6f7b204 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 10:08:00 compute-0 podman[215645]: 2025-10-09 10:08:00.164064918 +0000 UTC m=+0.020128136 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:08:00 compute-0 podman[215645]: 2025-10-09 10:08:00.264208805 +0000 UTC m=+0.120272001 container start a6d7d3b9e92fe81bf8b7a53d655f1378a1152520f9550dcd8ec311ada6f7b204 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_archimedes, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:08:00 compute-0 nova_compute[187439]: 2025-10-09 10:08:00.264 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:08:00 compute-0 nova_compute[187439]: 2025-10-09 10:08:00.265 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:08:00 compute-0 nova_compute[187439]: 2025-10-09 10:08:00.265 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:08:00 compute-0 podman[215645]: 2025-10-09 10:08:00.26569712 +0000 UTC m=+0.121760317 container attach a6d7d3b9e92fe81bf8b7a53d655f1378a1152520f9550dcd8ec311ada6f7b204 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_archimedes, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:08:00 compute-0 nova_compute[187439]: 2025-10-09 10:08:00.265 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:08:00 compute-0 nova_compute[187439]: 2025-10-09 10:08:00.266 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:08:00 compute-0 sweet_archimedes[215658]: 167 167
Oct  9 10:08:00 compute-0 systemd[1]: libpod-a6d7d3b9e92fe81bf8b7a53d655f1378a1152520f9550dcd8ec311ada6f7b204.scope: Deactivated successfully.
Oct  9 10:08:00 compute-0 podman[215663]: 2025-10-09 10:08:00.300254945 +0000 UTC m=+0.018737312 container died a6d7d3b9e92fe81bf8b7a53d655f1378a1152520f9550dcd8ec311ada6f7b204 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_archimedes, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct  9 10:08:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fe8c5004e1f911a7c05e4143ee08c30dab978edbf74deea4401e4589f9627e2-merged.mount: Deactivated successfully.
Oct  9 10:08:00 compute-0 podman[215663]: 2025-10-09 10:08:00.321114328 +0000 UTC m=+0.039596675 container remove a6d7d3b9e92fe81bf8b7a53d655f1378a1152520f9550dcd8ec311ada6f7b204 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 10:08:00 compute-0 systemd[1]: libpod-conmon-a6d7d3b9e92fe81bf8b7a53d655f1378a1152520f9550dcd8ec311ada6f7b204.scope: Deactivated successfully.
Oct  9 10:08:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:00.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:00 compute-0 podman[215702]: 2025-10-09 10:08:00.480000471 +0000 UTC m=+0.035533614 container create 308ba5308bf8c3ecf395f3296ea23ea9d4123ad749c2181cea24d29bc14b89b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True)
Oct  9 10:08:00 compute-0 systemd[1]: Started libpod-conmon-308ba5308bf8c3ecf395f3296ea23ea9d4123ad749c2181cea24d29bc14b89b5.scope.
Oct  9 10:08:00 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:08:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/547d77ac40deab2cacb159fc975a6b41fbde85f7f6ded724c38e8ec9236c4ba0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:08:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/547d77ac40deab2cacb159fc975a6b41fbde85f7f6ded724c38e8ec9236c4ba0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:08:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/547d77ac40deab2cacb159fc975a6b41fbde85f7f6ded724c38e8ec9236c4ba0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:08:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/547d77ac40deab2cacb159fc975a6b41fbde85f7f6ded724c38e8ec9236c4ba0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:08:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/547d77ac40deab2cacb159fc975a6b41fbde85f7f6ded724c38e8ec9236c4ba0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:08:00 compute-0 podman[215702]: 2025-10-09 10:08:00.561346842 +0000 UTC m=+0.116879986 container init 308ba5308bf8c3ecf395f3296ea23ea9d4123ad749c2181cea24d29bc14b89b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_fermi, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:08:00 compute-0 podman[215702]: 2025-10-09 10:08:00.467246502 +0000 UTC m=+0.022779656 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:08:00 compute-0 podman[215702]: 2025-10-09 10:08:00.567010431 +0000 UTC m=+0.122543575 container start 308ba5308bf8c3ecf395f3296ea23ea9d4123ad749c2181cea24d29bc14b89b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_fermi, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  9 10:08:00 compute-0 podman[215702]: 2025-10-09 10:08:00.568373432 +0000 UTC m=+0.123906575 container attach 308ba5308bf8c3ecf395f3296ea23ea9d4123ad749c2181cea24d29bc14b89b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:08:00 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:08:00 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2268930699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:08:00 compute-0 nova_compute[187439]: 2025-10-09 10:08:00.619 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:08:00 compute-0 trusting_fermi[215716]: --> passed data devices: 0 physical, 1 LVM
Oct  9 10:08:00 compute-0 trusting_fermi[215716]: --> All data devices are unavailable
Oct  9 10:08:00 compute-0 systemd[1]: libpod-308ba5308bf8c3ecf395f3296ea23ea9d4123ad749c2181cea24d29bc14b89b5.scope: Deactivated successfully.
Oct  9 10:08:00 compute-0 podman[215702]: 2025-10-09 10:08:00.873978674 +0000 UTC m=+0.429511818 container died 308ba5308bf8c3ecf395f3296ea23ea9d4123ad749c2181cea24d29bc14b89b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_fermi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:08:00 compute-0 nova_compute[187439]: 2025-10-09 10:08:00.879 2 WARNING nova.virt.libvirt.driver [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:08:00 compute-0 nova_compute[187439]: 2025-10-09 10:08:00.880 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4576MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:08:00 compute-0 nova_compute[187439]: 2025-10-09 10:08:00.880 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:08:00 compute-0 nova_compute[187439]: 2025-10-09 10:08:00.880 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:08:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-547d77ac40deab2cacb159fc975a6b41fbde85f7f6ded724c38e8ec9236c4ba0-merged.mount: Deactivated successfully.
Oct  9 10:08:00 compute-0 podman[215702]: 2025-10-09 10:08:00.899709563 +0000 UTC m=+0.455242708 container remove 308ba5308bf8c3ecf395f3296ea23ea9d4123ad749c2181cea24d29bc14b89b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_fermi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct  9 10:08:00 compute-0 nova_compute[187439]: 2025-10-09 10:08:00.926 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:08:00 compute-0 nova_compute[187439]: 2025-10-09 10:08:00.926 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:08:00 compute-0 systemd[1]: libpod-conmon-308ba5308bf8c3ecf395f3296ea23ea9d4123ad749c2181cea24d29bc14b89b5.scope: Deactivated successfully.
Oct  9 10:08:00 compute-0 nova_compute[187439]: 2025-10-09 10:08:00.942 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:08:01 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:00 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:01 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:00 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:01 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:00 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:01 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:01 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:08:01 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/33496731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:08:01 compute-0 nova_compute[187439]: 2025-10-09 10:08:01.307 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:08:01 compute-0 nova_compute[187439]: 2025-10-09 10:08:01.311 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:08:01 compute-0 nova_compute[187439]: 2025-10-09 10:08:01.328 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:08:01 compute-0 nova_compute[187439]: 2025-10-09 10:08:01.329 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:08:01 compute-0 nova_compute[187439]: 2025-10-09 10:08:01.329 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.449s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:08:01 compute-0 podman[215847]: 2025-10-09 10:08:01.383545918 +0000 UTC m=+0.030442466 container create 43aa5d3e3beae19e94cda6ccfd72aa9a0eab98f52153499a17422f344011761e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mccarthy, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:08:01 compute-0 systemd[1]: Started libpod-conmon-43aa5d3e3beae19e94cda6ccfd72aa9a0eab98f52153499a17422f344011761e.scope.
Oct  9 10:08:01 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:08:01 compute-0 podman[215847]: 2025-10-09 10:08:01.443511221 +0000 UTC m=+0.090407759 container init 43aa5d3e3beae19e94cda6ccfd72aa9a0eab98f52153499a17422f344011761e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct  9 10:08:01 compute-0 podman[215847]: 2025-10-09 10:08:01.449829935 +0000 UTC m=+0.096726473 container start 43aa5d3e3beae19e94cda6ccfd72aa9a0eab98f52153499a17422f344011761e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mccarthy, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:08:01 compute-0 podman[215847]: 2025-10-09 10:08:01.451093398 +0000 UTC m=+0.097989927 container attach 43aa5d3e3beae19e94cda6ccfd72aa9a0eab98f52153499a17422f344011761e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mccarthy, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  9 10:08:01 compute-0 bold_mccarthy[215861]: 167 167
Oct  9 10:08:01 compute-0 systemd[1]: libpod-43aa5d3e3beae19e94cda6ccfd72aa9a0eab98f52153499a17422f344011761e.scope: Deactivated successfully.
Oct  9 10:08:01 compute-0 conmon[215861]: conmon 43aa5d3e3beae19e94cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-43aa5d3e3beae19e94cda6ccfd72aa9a0eab98f52153499a17422f344011761e.scope/container/memory.events
Oct  9 10:08:01 compute-0 podman[215847]: 2025-10-09 10:08:01.455207286 +0000 UTC m=+0.102103844 container died 43aa5d3e3beae19e94cda6ccfd72aa9a0eab98f52153499a17422f344011761e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:08:01 compute-0 podman[215847]: 2025-10-09 10:08:01.371316207 +0000 UTC m=+0.018212765 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:08:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-79d75f8b9dcabd3bbcc669bb6b5adcb1b2a31be5648793fae0f8e4343e8a8d0f-merged.mount: Deactivated successfully.
Oct  9 10:08:01 compute-0 podman[215847]: 2025-10-09 10:08:01.473630305 +0000 UTC m=+0.120526843 container remove 43aa5d3e3beae19e94cda6ccfd72aa9a0eab98f52153499a17422f344011761e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mccarthy, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:08:01 compute-0 systemd[1]: libpod-conmon-43aa5d3e3beae19e94cda6ccfd72aa9a0eab98f52153499a17422f344011761e.scope: Deactivated successfully.
Oct  9 10:08:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:01 compute-0 podman[215883]: 2025-10-09 10:08:01.621361794 +0000 UTC m=+0.037030117 container create 89e06daca00b6928e086da1db4550f70a621d06db15e4b3da9a1d98aec516c0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_almeida, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct  9 10:08:01 compute-0 systemd[1]: Started libpod-conmon-89e06daca00b6928e086da1db4550f70a621d06db15e4b3da9a1d98aec516c0a.scope.
Oct  9 10:08:01 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37beed02719b413c22d991ef10f368dc31eee3369640bd758c3fee9f96fffeb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37beed02719b413c22d991ef10f368dc31eee3369640bd758c3fee9f96fffeb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37beed02719b413c22d991ef10f368dc31eee3369640bd758c3fee9f96fffeb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37beed02719b413c22d991ef10f368dc31eee3369640bd758c3fee9f96fffeb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:08:01 compute-0 podman[215883]: 2025-10-09 10:08:01.687755229 +0000 UTC m=+0.103423552 container init 89e06daca00b6928e086da1db4550f70a621d06db15e4b3da9a1d98aec516c0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_almeida, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  9 10:08:01 compute-0 podman[215883]: 2025-10-09 10:08:01.692965403 +0000 UTC m=+0.108633716 container start 89e06daca00b6928e086da1db4550f70a621d06db15e4b3da9a1d98aec516c0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_almeida, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  9 10:08:01 compute-0 podman[215883]: 2025-10-09 10:08:01.694431187 +0000 UTC m=+0.110099500 container attach 89e06daca00b6928e086da1db4550f70a621d06db15e4b3da9a1d98aec516c0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_almeida, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:08:01 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1052: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:08:01 compute-0 podman[215883]: 2025-10-09 10:08:01.60705686 +0000 UTC m=+0.022725173 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:08:01 compute-0 cool_almeida[215896]: {
Oct  9 10:08:01 compute-0 cool_almeida[215896]:    "1": [
Oct  9 10:08:01 compute-0 cool_almeida[215896]:        {
Oct  9 10:08:01 compute-0 cool_almeida[215896]:            "devices": [
Oct  9 10:08:01 compute-0 cool_almeida[215896]:                "/dev/loop3"
Oct  9 10:08:01 compute-0 cool_almeida[215896]:            ],
Oct  9 10:08:01 compute-0 cool_almeida[215896]:            "lv_name": "ceph_lv0",
Oct  9 10:08:01 compute-0 cool_almeida[215896]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:08:01 compute-0 cool_almeida[215896]:            "lv_size": "21470642176",
Oct  9 10:08:01 compute-0 cool_almeida[215896]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 10:08:01 compute-0 cool_almeida[215896]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 10:08:01 compute-0 cool_almeida[215896]:            "name": "ceph_lv0",
Oct  9 10:08:01 compute-0 cool_almeida[215896]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:08:01 compute-0 cool_almeida[215896]:            "tags": {
Oct  9 10:08:01 compute-0 cool_almeida[215896]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:08:01 compute-0 cool_almeida[215896]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 10:08:01 compute-0 cool_almeida[215896]:                "ceph.cephx_lockbox_secret": "",
Oct  9 10:08:01 compute-0 cool_almeida[215896]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 10:08:01 compute-0 cool_almeida[215896]:                "ceph.cluster_name": "ceph",
Oct  9 10:08:01 compute-0 cool_almeida[215896]:                "ceph.crush_device_class": "",
Oct  9 10:08:01 compute-0 cool_almeida[215896]:                "ceph.encrypted": "0",
Oct  9 10:08:01 compute-0 cool_almeida[215896]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 10:08:01 compute-0 cool_almeida[215896]:                "ceph.osd_id": "1",
Oct  9 10:08:01 compute-0 cool_almeida[215896]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 10:08:01 compute-0 cool_almeida[215896]:                "ceph.type": "block",
Oct  9 10:08:01 compute-0 cool_almeida[215896]:                "ceph.vdo": "0",
Oct  9 10:08:01 compute-0 cool_almeida[215896]:                "ceph.with_tpm": "0"
Oct  9 10:08:01 compute-0 cool_almeida[215896]:            },
Oct  9 10:08:01 compute-0 cool_almeida[215896]:            "type": "block",
Oct  9 10:08:01 compute-0 cool_almeida[215896]:            "vg_name": "ceph_vg0"
Oct  9 10:08:01 compute-0 cool_almeida[215896]:        }
Oct  9 10:08:01 compute-0 cool_almeida[215896]:    ]
Oct  9 10:08:01 compute-0 cool_almeida[215896]: }
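The JSON block above is the inventory a ceph-volume helper container prints (the shape is consistent with `ceph-volume lvm list --format json`): a map of OSD id to logical volumes, each carrying its `ceph.*` LV tags. A minimal sketch for pulling out the interesting fields, using a payload in the same shape as the log but trimmed to illustrative keys:

    import json

    # Payload in the same shape as the container output above,
    # trimmed to the fields used below (illustrative, not the full record).
    raw = '''
    {
      "1": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "tags": {
            "ceph.osd_id": "1",
            "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
            "ceph.type": "block"
          }
        }
      ]
    }
    '''

    inventory = json.loads(raw)
    for osd_id, lvs in inventory.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(fsid={tags['ceph.osd_fsid']}, type={tags['ceph.type']})")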
Oct  9 10:08:01 compute-0 nova_compute[187439]: 2025-10-09 10:08:01.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:01 compute-0 systemd[1]: libpod-89e06daca00b6928e086da1db4550f70a621d06db15e4b3da9a1d98aec516c0a.scope: Deactivated successfully.
Oct  9 10:08:01 compute-0 podman[215883]: 2025-10-09 10:08:01.962795196 +0000 UTC m=+0.378463510 container died 89e06daca00b6928e086da1db4550f70a621d06db15e4b3da9a1d98aec516c0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_almeida, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:08:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-37beed02719b413c22d991ef10f368dc31eee3369640bd758c3fee9f96fffeb3-merged.mount: Deactivated successfully.
Oct  9 10:08:01 compute-0 podman[215883]: 2025-10-09 10:08:01.986246448 +0000 UTC m=+0.401914761 container remove 89e06daca00b6928e086da1db4550f70a621d06db15e4b3da9a1d98aec516c0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_almeida, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:08:02 compute-0 systemd[1]: libpod-conmon-89e06daca00b6928e086da1db4550f70a621d06db15e4b3da9a1d98aec516c0a.scope: Deactivated successfully.
Oct  9 10:08:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:02.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:08:02] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Oct  9 10:08:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:08:02] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Oct  9 10:08:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:02.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
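The paired radosgw entries recurring every two seconds are anonymous HEAD / probes from 192.168.122.100 and .102, answered 200 at sub-millisecond latency, i.e. external health checks. A sketch of an equivalent probe (the port is an assumption, since the beast access log above does not record it, and http.client speaks HTTP/1.1 rather than the HTTP/1.0 shown):

    import http.client

    # Port 8080 is hypothetical; the access log above does not record it.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # a healthy RGW answers 200, as in the beast lines
    conn.close()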
Oct  9 10:08:02 compute-0 podman[215997]: 2025-10-09 10:08:02.498067591 +0000 UTC m=+0.032448778 container create 77f4c6a3ff9cbf9a2de03192b2b815d15b5ccdd9a47747aee83194d3246f8df1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:08:02 compute-0 systemd[1]: Started libpod-conmon-77f4c6a3ff9cbf9a2de03192b2b815d15b5ccdd9a47747aee83194d3246f8df1.scope.
Oct  9 10:08:02 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:08:02 compute-0 podman[215997]: 2025-10-09 10:08:02.564132977 +0000 UTC m=+0.098514164 container init 77f4c6a3ff9cbf9a2de03192b2b815d15b5ccdd9a47747aee83194d3246f8df1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_chaplygin, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:08:02 compute-0 podman[215997]: 2025-10-09 10:08:02.571672153 +0000 UTC m=+0.106053340 container start 77f4c6a3ff9cbf9a2de03192b2b815d15b5ccdd9a47747aee83194d3246f8df1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_chaplygin, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:08:02 compute-0 podman[215997]: 2025-10-09 10:08:02.573963343 +0000 UTC m=+0.108344530 container attach 77f4c6a3ff9cbf9a2de03192b2b815d15b5ccdd9a47747aee83194d3246f8df1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_chaplygin, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  9 10:08:02 compute-0 nervous_chaplygin[216011]: 167 167
Oct  9 10:08:02 compute-0 systemd[1]: libpod-77f4c6a3ff9cbf9a2de03192b2b815d15b5ccdd9a47747aee83194d3246f8df1.scope: Deactivated successfully.
Oct  9 10:08:02 compute-0 podman[215997]: 2025-10-09 10:08:02.576293818 +0000 UTC m=+0.110675004 container died 77f4c6a3ff9cbf9a2de03192b2b815d15b5ccdd9a47747aee83194d3246f8df1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_chaplygin, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 10:08:02 compute-0 podman[215997]: 2025-10-09 10:08:02.48598115 +0000 UTC m=+0.020362357 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:08:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-276b732859aa12dbaddafbab7684787896fb3f93f8246bf367f3d5a2036f79f5-merged.mount: Deactivated successfully.
Oct  9 10:08:02 compute-0 podman[215997]: 2025-10-09 10:08:02.594878261 +0000 UTC m=+0.129259448 container remove 77f4c6a3ff9cbf9a2de03192b2b815d15b5ccdd9a47747aee83194d3246f8df1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:08:02 compute-0 systemd[1]: libpod-conmon-77f4c6a3ff9cbf9a2de03192b2b815d15b5ccdd9a47747aee83194d3246f8df1.scope: Deactivated successfully.
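The short-lived helper containers above (cool_almeida, nervous_chaplygin) each run the full podman lifecycle in well under a second: image pull check, create, init, start, attach, died, remove, bracketed by the matching libpod/conmon systemd scopes. The same sequence can be watched live with podman events; a minimal sketch, with the caveat that the JSON field names follow podman's event formatter and may vary by version:

    import json
    import subprocess

    # Stream podman events as JSON and print container lifecycle states,
    # mirroring the create/init/start/attach/died/remove lines above.
    # Field names follow podman's JSON event format (may vary by version).
    proc = subprocess.Popen(["podman", "events", "--format", "json"],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        if ev.get("Type") == "container":
            print(ev.get("Status"), ev.get("Name"), str(ev.get("ID", ""))[:12])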
Oct  9 10:08:02 compute-0 podman[216034]: 2025-10-09 10:08:02.73790021 +0000 UTC m=+0.030777146 container create 6f55ff0355dee662a110d60bee3aa1252672a1dcf490ef42a7a3c91291268889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_tharp, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  9 10:08:02 compute-0 systemd[1]: Started libpod-conmon-6f55ff0355dee662a110d60bee3aa1252672a1dcf490ef42a7a3c91291268889.scope.
Oct  9 10:08:02 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46b4eba98f583063c081bafa50844a8edd5515930fdba8fbabd29b34e4342a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46b4eba98f583063c081bafa50844a8edd5515930fdba8fbabd29b34e4342a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46b4eba98f583063c081bafa50844a8edd5515930fdba8fbabd29b34e4342a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46b4eba98f583063c081bafa50844a8edd5515930fdba8fbabd29b34e4342a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
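The kernel's "supports timestamps until 2038 (0x7fffffff)" warnings flag the 32-bit time_t ceiling on these overlay remounts; the hex value decodes directly to the classic y2038 limit:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch: the 32-bit time_t maximum.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00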
Oct  9 10:08:02 compute-0 podman[216034]: 2025-10-09 10:08:02.801127184 +0000 UTC m=+0.094004140 container init 6f55ff0355dee662a110d60bee3aa1252672a1dcf490ef42a7a3c91291268889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_tharp, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  9 10:08:02 compute-0 podman[216034]: 2025-10-09 10:08:02.807784787 +0000 UTC m=+0.100661723 container start 6f55ff0355dee662a110d60bee3aa1252672a1dcf490ef42a7a3c91291268889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  9 10:08:02 compute-0 podman[216034]: 2025-10-09 10:08:02.810561033 +0000 UTC m=+0.103437989 container attach 6f55ff0355dee662a110d60bee3aa1252672a1dcf490ef42a7a3c91291268889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_tharp, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  9 10:08:02 compute-0 podman[216034]: 2025-10-09 10:08:02.725171559 +0000 UTC m=+0.018048515 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:08:03 compute-0 nova_compute[187439]: 2025-10-09 10:08:03.330 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:03 compute-0 jovial_tharp[216047]: {}
Oct  9 10:08:03 compute-0 lvm[216124]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:08:03 compute-0 lvm[216124]: VG ceph_vg0 finished
Oct  9 10:08:03 compute-0 systemd[1]: libpod-6f55ff0355dee662a110d60bee3aa1252672a1dcf490ef42a7a3c91291268889.scope: Deactivated successfully.
Oct  9 10:08:03 compute-0 podman[216034]: 2025-10-09 10:08:03.39337354 +0000 UTC m=+0.686250476 container died 6f55ff0355dee662a110d60bee3aa1252672a1dcf490ef42a7a3c91291268889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_tharp, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:08:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c46b4eba98f583063c081bafa50844a8edd5515930fdba8fbabd29b34e4342a1-merged.mount: Deactivated successfully.
Oct  9 10:08:03 compute-0 podman[216034]: 2025-10-09 10:08:03.419541664 +0000 UTC m=+0.712418601 container remove 6f55ff0355dee662a110d60bee3aa1252672a1dcf490ef42a7a3c91291268889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_tharp, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:08:03 compute-0 systemd[1]: libpod-conmon-6f55ff0355dee662a110d60bee3aa1252672a1dcf490ef42a7a3c91291268889.scope: Deactivated successfully.
Oct  9 10:08:03 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:08:03 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:08:03 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:08:03 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:08:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:03.584Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:03.595Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:03.595Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:03.595Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
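The alertmanager dispatcher keeps failing to deliver ceph-dashboard webhooks because the np000547830x.shiftstack names do not resolve via 192.168.122.80:53. The lookup failure is easy to reproduce outside the container (hostnames and port taken from the log; which resolver answers depends on the host's /etc/resolv.conf):

    import socket

    for host in ("np0005478302.shiftstack",
                 "np0005478303.shiftstack",
                 "np0005478304.shiftstack"):
        try:
            addrs = {ai[4][0] for ai in socket.getaddrinfo(host, 8443)}
            print(host, "->", sorted(addrs))
        except socket.gaierror as exc:
            # Matches the "no such host" errors alertmanager reports.
            print(host, "->", exc)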
Oct  9 10:08:03 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1053: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct  9 10:08:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:04.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:04 compute-0 nova_compute[187439]: 2025-10-09 10:08:04.242 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:04 compute-0 nova_compute[187439]: 2025-10-09 10:08:04.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:04.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:04 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:08:04 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:08:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:08:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:08:04 compute-0 nova_compute[187439]: 2025-10-09 10:08:04.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:05 compute-0 nova_compute[187439]: 2025-10-09 10:08:05.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:05 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1054: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:08:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:05 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:05 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:05 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:06 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:06 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
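The ganesha.nfsd block repeats on a roughly five-second cadence: the NFS server re-enters a 90-second grace period, reloads client recovery state from the RADOS backend, then checks whether grace can be lifted (reclaim complete 0, client count 0). A small parser for this line format, keyed only on the fields visible above:

    import re

    line = ("09/10/2025 10:08:05 : epoch 68e78389 : compute-0 : "
            "ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT "
            ":NFS Server Now IN GRACE, duration 90")

    pat = re.compile(r"^(?P<ts>\S+ \S+) : epoch (?P<epoch>\w+) : (?P<host>\S+) : "
                     r"ganesha\.nfsd-\d+\[[^\]]+\] (?P<func>\w+) "
                     r":(?P<comp>\w+) :(?P<level>\w+) :(?P<msg>.*)$")
    m = pat.match(line)
    print(m.group("ts"), m.group("func"), "->", m.group("msg"))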
Oct  9 10:08:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:08:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:06.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:08:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:06.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
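The recurring mon _set_new_cache_sizes line divides roughly a gigabyte of cache among incremental, full, and key-value allocations; converting the byte counts to MiB makes the split readable:

    # Byte counts copied from the log line above.
    sizes = {"cache_size": 1020054731,
             "inc_alloc": 348127232,
             "full_alloc": 348127232,
             "kv_alloc": 318767104}
    for name, b in sizes.items():
        print(f"{name}: {b / 2**20:.0f} MiB")
    # cache_size ~973 MiB; inc/full 332 MiB each; kv 304 MiB.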
Oct  9 10:08:06 compute-0 nova_compute[187439]: 2025-10-09 10:08:06.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:07.115Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:07.127Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:07.129Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:07.130Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:07 compute-0 nova_compute[187439]: 2025-10-09 10:08:07.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:07 compute-0 nova_compute[187439]: 2025-10-09 10:08:07.246 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:08:07 compute-0 nova_compute[187439]: 2025-10-09 10:08:07.246 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:08:07 compute-0 nova_compute[187439]: 2025-10-09 10:08:07.259 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 10:08:07 compute-0 nova_compute[187439]: 2025-10-09 10:08:07.259 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:07 compute-0 nova_compute[187439]: 2025-10-09 10:08:07.259 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:08:07 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1055: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct  9 10:08:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:08.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:08.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:08 compute-0 podman[216169]: 2025-10-09 10:08:08.60465987 +0000 UTC m=+0.042675251 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
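The long podman line above is a health_status event for ovn_metadata_agent: the healthcheck configured in config_data ('test': '/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks) ran and reported healthy with a zero failing streak. The check can also be triggered on demand; a minimal sketch, container name taken from the log:

    import subprocess

    # `podman healthcheck run` executes the container's configured check;
    # exit status 0 means healthy, non-zero means unhealthy or error.
    res = subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"])
    print("healthy" if res.returncode == 0 else f"unhealthy ({res.returncode})")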
Oct  9 10:08:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:08.946Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:08.956Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:08.957Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:08.957Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:09 compute-0 nova_compute[187439]: 2025-10-09 10:08:09.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:09 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1056: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct  9 10:08:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:10.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:08:10.119 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:08:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:08:10.119 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:08:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:08:10.120 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:08:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:10.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:10 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:11 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:11 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:11 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:11 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:11 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1057: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:08:11 compute-0 nova_compute[187439]: 2025-10-09 10:08:11.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  9 10:08:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1775733170' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  9 10:08:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  9 10:08:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1775733170' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
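The audited mon_commands show client.openstack polling cluster capacity ("df") and the quota on the volumes pool, the usual Cinder-side capacity check. A sketch of the same two queries from the CLI via subprocess (assumes a ceph binary and a readable keyring on the host; the "stats"/"total_bytes" keys are an assumption about the ceph df JSON layout):

    import json
    import subprocess

    def ceph(*args):
        # Assumes the ceph CLI and a readable keyring on this host.
        out = subprocess.check_output(["ceph", *args, "--format", "json"])
        return json.loads(out)

    df = ceph("df")                                   # {"prefix":"df"} above
    quota = ceph("osd", "pool", "get-quota", "volumes")
    print(df["stats"]["total_bytes"], quota)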
Oct  9 10:08:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:12.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:08:12] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Oct  9 10:08:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:08:12] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Oct  9 10:08:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:12.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:13.586Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:13.594Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:13.594Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:13.594Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:13 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1058: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:08:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:14.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:14 compute-0 podman[216213]: 2025-10-09 10:08:14.124723596 +0000 UTC m=+0.071947228 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  9 10:08:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:14.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:14 compute-0 nova_compute[187439]: 2025-10-09 10:08:14.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:15 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1059: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:08:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:15 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:16 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:16 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:16 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:16 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:08:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:16.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:08:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:16.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:16 compute-0 nova_compute[187439]: 2025-10-09 10:08:16.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:17.115Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:17.133Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:17.133Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:17.133Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:17 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1060: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:08:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:18.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:18.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:18.947Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:18.961Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:18.961Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:18.962Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:08:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:08:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:08:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:08:19 compute-0 nova_compute[187439]: 2025-10-09 10:08:19.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:08:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:08:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:08:19 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1061: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:08:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:08:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:08:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:20.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:20.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:20 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:21 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:21 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:21 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:21 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:21 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1062: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:08:21 compute-0 nova_compute[187439]: 2025-10-09 10:08:21.803 2 DEBUG oslo_concurrency.processutils [None req-06752881-e4c7-4336-b1c1-bcd187f39813 3a4ac457589b496085910d92d06034e7 a53d5690b6a54109990182326650a2b8 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 10:08:21 compute-0 nova_compute[187439]: 2025-10-09 10:08:21.817 2 DEBUG oslo_concurrency.processutils [None req-06752881-e4c7-4336-b1c1-bcd187f39813 3a4ac457589b496085910d92d06034e7 a53d5690b6a54109990182326650a2b8 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 10:08:21 compute-0 nova_compute[187439]: 2025-10-09 10:08:21.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:08:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:08:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:22.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:08:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:08:22] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Oct  9 10:08:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:08:22] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Oct  9 10:08:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:22.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:23.586Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:23.595Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:23.596Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:23.596Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:23 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1063: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:08:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:24.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:24.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:24 compute-0 nova_compute[187439]: 2025-10-09 10:08:24.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:08:24 compute-0 podman[216244]: 2025-10-09 10:08:24.632592763 +0000 UTC m=+0.066104891 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  9 10:08:25 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1064: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:08:25 compute-0 nova_compute[187439]: 2025-10-09 10:08:25.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:08:25 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:08:25.715 92053 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  9 10:08:25 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:08:25.716 92053 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  9 10:08:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:25 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:25 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:25 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:26 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:26 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:26.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:26.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:26 compute-0 nova_compute[187439]: 2025-10-09 10:08:26.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:08:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:27.116Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:27.128Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:27.128Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:27.129Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:27 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1065: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:08:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:28.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:28.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:28.947Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:28.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:28.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:28.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:29 compute-0 nova_compute[187439]: 2025-10-09 10:08:29.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:08:29 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1066: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:08:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:30.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:30.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:30 compute-0 podman[216273]: 2025-10-09 10:08:30.617734708 +0000 UTC m=+0.053340872 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:08:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:30 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:30 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:30 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:31 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:30 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:31 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1067: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:08:31 compute-0 nova_compute[187439]: 2025-10-09 10:08:31.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:08:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:32.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:08:32] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Oct  9 10:08:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:08:32] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Oct  9 10:08:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:32.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:32 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:08:32.718 92053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ef217152-08e8-40c8-a663-3565c5b77d4a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  9 10:08:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:33.587Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:33.597Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:33.597Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:33.597Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:33 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1068: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:08:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:34.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:34 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-grafana-compute-0[33936]: logger=infra.usagestats t=2025-10-09T10:08:34.406894462Z level=info msg="Usage stats are ready to report"
Oct  9 10:08:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:34.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:08:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:08:34 compute-0 nova_compute[187439]: 2025-10-09 10:08:34.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:08:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:34 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:34 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:34 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:35 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:35 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:35 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1069: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 10:08:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:36.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:36.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:36 compute-0 nova_compute[187439]: 2025-10-09 10:08:36.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:08:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:37.117Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:37.128Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:37.128Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:37.129Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:37 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1070: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 10:08:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:38.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:38.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:38.948Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:38.959Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:38.960Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:38.960Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:39 compute-0 podman[216324]: 2025-10-09 10:08:39.635281677 +0000 UTC m=+0.072608074 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:08:39 compute-0 nova_compute[187439]: 2025-10-09 10:08:39.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:08:39 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1071: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 10:08:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:40 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:39 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:40.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:08:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:40.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:08:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:41 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1072: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 10:08:41 compute-0 nova_compute[187439]: 2025-10-09 10:08:41.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:08:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:42.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:08:42] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Oct  9 10:08:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:08:42] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Oct  9 10:08:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:42.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:43.587Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:43.595Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:43.595Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:43.595Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:43 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1073: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 10:08:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:44 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:44 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:44.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:44.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:44 compute-0 podman[216346]: 2025-10-09 10:08:44.625956141 +0000 UTC m=+0.055492279 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 10:08:44 compute-0 nova_compute[187439]: 2025-10-09 10:08:44.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:08:45 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1074: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 10:08:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:46.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:08:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:46.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:08:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:46 compute-0 nova_compute[187439]: 2025-10-09 10:08:46.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:08:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:47.117Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:47.129Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:47.130Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:47.131Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:47 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1075: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:08:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:48.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:48.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:48.949Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:48.957Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:48.957Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:48.957Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:49 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:49 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_10:08:49
Oct  9 10:08:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 10:08:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 10:08:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'backups', '.nfs', '.rgw.root', 'images', '.mgr', 'cephfs.cephfs.data']
Oct  9 10:08:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 10:08:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:08:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:08:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:08:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:08:49 compute-0 nova_compute[187439]: 2025-10-09 10:08:49.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:08:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:08:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 10:08:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:08:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:08:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:08:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:08:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:08:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:08:49 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1076: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:08:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 10:08:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:08:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:08:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:08:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:08:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:50.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:50.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:51 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1077: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:08:51 compute-0 nova_compute[187439]: 2025-10-09 10:08:51.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:52.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:08:52] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Oct  9 10:08:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:08:52] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Oct  9 10:08:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:52.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:53.588Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:53.600Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:53.601Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:53.601Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:53 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1078: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:08:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:54 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:54 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:08:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:54.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:08:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:54.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:54 compute-0 nova_compute[187439]: 2025-10-09 10:08:54.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:55 compute-0 podman[216398]: 2025-10-09 10:08:55.632823471 +0000 UTC m=+0.064089722 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller)
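
Note: the podman health_status event above is the periodic healthcheck for the ovn_controller container reporting healthy with a failing streak of 0. The same state can be read back on demand; a sketch using podman inspect, with the container name taken from the event and JSON field names per the Docker-compatible inspect schema:

    import json
    import subprocess

    # Container name copied from the health_status event above.
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}",
         "ovn_controller"],
        capture_output=True, text=True, check=True)
    health = json.loads(out.stdout)
    print(health["Status"], health["FailingStreak"])
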
Oct  9 10:08:55 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1079: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:08:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:56.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:56.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:56 compute-0 nova_compute[187439]: 2025-10-09 10:08:56.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:57.118Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:57.132Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:57.133Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:57.134Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:57 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1080: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:08:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:58.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:08:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:08:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:58.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:08:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:58.949Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:58.959Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:58.959Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:08:58.959Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:08:59 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:59 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:59 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:59 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:08:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
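
Note: the pg_autoscaler rows above can be sanity-checked by hand: in every row, pg target ≈ (fraction of space used) × bias × K with K ≈ 300 (e.g. 0.0021557 / 7.1857e-06 ≈ 300 for '.mgr'), after which the module quantizes the target and keeps the current pg_num when the change is too small to matter. The multiplier is inferred from these logged numbers, not quoted from the Ceph source; a quick check:

    # (used_fraction, bias, logged_pg_target) copied from the rows above.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
        "vms":                (6.359070782053786e-08, 1.0, 1.907721234616136e-05),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0, 0.0006104707950771635),
    }
    K = 300.0  # inferred from target/used ratios in this log, not a Ceph constant
    for name, (used, bias, logged) in pools.items():
        est = used * bias * K
        print(f"{name}: estimated {est:.6g} vs logged {logged:.6g}")
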
Oct  9 10:08:59 compute-0 nova_compute[187439]: 2025-10-09 10:08:59.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:59 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1081: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:09:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:00.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:00.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:01 compute-0 nova_compute[187439]: 2025-10-09 10:09:01.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:01 compute-0 nova_compute[187439]: 2025-10-09 10:09:01.259 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
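
Note: the "Running periodic task ComputeManager._*" lines come from oslo.service's periodic task runner iterating over decorated ComputeManager methods. A minimal sketch of that machinery; DemoManager and _poll_something are illustrative stand-ins, not nova code:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        """Illustrative stand-in for nova's ComputeManager."""

        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)
        def _poll_something(self, context):
            # Each eligible task is logged as 'Running periodic task ...'
            # by the runner, as in the nova lines above.
            pass

    DemoManager().run_periodic_tasks(context=None)
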
Oct  9 10:09:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:01 compute-0 podman[216427]: 2025-10-09 10:09:01.642915144 +0000 UTC m=+0.080402251 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  9 10:09:01 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1082: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:09:01 compute-0 nova_compute[187439]: 2025-10-09 10:09:01.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:02.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:02 compute-0 nova_compute[187439]: 2025-10-09 10:09:02.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:02 compute-0 nova_compute[187439]: 2025-10-09 10:09:02.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:02 compute-0 nova_compute[187439]: 2025-10-09 10:09:02.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:02 compute-0 nova_compute[187439]: 2025-10-09 10:09:02.275 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:09:02 compute-0 nova_compute[187439]: 2025-10-09 10:09:02.275 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:09:02 compute-0 nova_compute[187439]: 2025-10-09 10:09:02.275 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:09:02 compute-0 nova_compute[187439]: 2025-10-09 10:09:02.276 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:09:02 compute-0 nova_compute[187439]: 2025-10-09 10:09:02.276 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:09:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:09:02] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Oct  9 10:09:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:09:02] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Oct  9 10:09:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:02.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:02 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:09:02 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2740802142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:09:02 compute-0 nova_compute[187439]: 2025-10-09 10:09:02.670 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
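
Note: the "ceph df --format=json --id openstack" subprocess nova runs above arrives at the monitor as the mon_command({"prefix": "df", ...}) just audited by ceph-mon. A minimal sketch issuing the same command in-process through python-rados instead of a subprocess; the client name and conf path are copied from the log lines, and it is an assumption that the caller can read that keyring:

    import json
    import rados  # python-rados bindings

    # Same identity and conf the nova subprocess uses above.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    try:
        # JSON shape matches the mon_command({"prefix": "df", ...}) audit entries.
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        if ret == 0:
            stats = json.loads(outbuf)["stats"]
            print(stats["total_bytes"], stats["total_avail_bytes"])
    finally:
        cluster.shutdown()
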
Oct  9 10:09:02 compute-0 nova_compute[187439]: 2025-10-09 10:09:02.938 2 WARNING nova.virt.libvirt.driver [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:09:02 compute-0 nova_compute[187439]: 2025-10-09 10:09:02.939 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4645MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:09:02 compute-0 nova_compute[187439]: 2025-10-09 10:09:02.940 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:09:02 compute-0 nova_compute[187439]: 2025-10-09 10:09:02.941 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:09:02 compute-0 nova_compute[187439]: 2025-10-09 10:09:02.993 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:09:02 compute-0 nova_compute[187439]: 2025-10-09 10:09:02.993 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:09:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:03 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:03 compute-0 nova_compute[187439]: 2025-10-09 10:09:03.006 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:09:03 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:09:03 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2969860949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:09:03 compute-0 nova_compute[187439]: 2025-10-09 10:09:03.386 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:09:03 compute-0 nova_compute[187439]: 2025-10-09 10:09:03.391 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:09:03 compute-0 nova_compute[187439]: 2025-10-09 10:09:03.413 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:09:03 compute-0 nova_compute[187439]: 2025-10-09 10:09:03.415 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:09:03 compute-0 nova_compute[187439]: 2025-10-09 10:09:03.415 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.474s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
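
Note: the inventory dictionary logged above determines schedulable capacity in placement as (total - reserved) × allocation_ratio per resource class, which is why 4 vCPUs advertise as 16 schedulable with ratio 4.0. A quick check with the logged figures; the formula is the standard placement capacity rule, restated here as a sketch:

    # Figures copied from the 'Inventory has not changed' log line above.
    inventory = {
        "VCPU":      {"total": 4,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
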
Oct  9 10:09:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:03.589Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:03.602Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:03.602Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:03.603Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:03 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1083: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:09:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:04.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Oct  9 10:09:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 10:09:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Oct  9 10:09:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 10:09:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Oct  9 10:09:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 10:09:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:09:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:09:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 10:09:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:09:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1084: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 10:09:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1085: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 718 B/s rd, 0 op/s
Oct  9 10:09:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 10:09:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:09:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 10:09:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:09:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 10:09:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 10:09:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 10:09:04 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:09:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:09:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:09:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:04.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:09:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:09:04 compute-0 nova_compute[187439]: 2025-10-09 10:09:04.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:04 compute-0 podman[216653]: 2025-10-09 10:09:04.78020377 +0000 UTC m=+0.037305225 container create ed4d5e1fb0ec57311aaa0ff34d8c517a3084aa82cdf90d8f9218929c788c655f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_heyrovsky, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 10:09:04 compute-0 systemd[1]: Started libpod-conmon-ed4d5e1fb0ec57311aaa0ff34d8c517a3084aa82cdf90d8f9218929c788c655f.scope.
Oct  9 10:09:04 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:09:04 compute-0 podman[216653]: 2025-10-09 10:09:04.853442551 +0000 UTC m=+0.110544026 container init ed4d5e1fb0ec57311aaa0ff34d8c517a3084aa82cdf90d8f9218929c788c655f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:09:04 compute-0 podman[216653]: 2025-10-09 10:09:04.860260207 +0000 UTC m=+0.117361662 container start ed4d5e1fb0ec57311aaa0ff34d8c517a3084aa82cdf90d8f9218929c788c655f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_heyrovsky, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:09:04 compute-0 podman[216653]: 2025-10-09 10:09:04.765091703 +0000 UTC m=+0.022193178 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:09:04 compute-0 podman[216653]: 2025-10-09 10:09:04.861993476 +0000 UTC m=+0.119094921 container attach ed4d5e1fb0ec57311aaa0ff34d8c517a3084aa82cdf90d8f9218929c788c655f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_heyrovsky, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:09:04 compute-0 bold_heyrovsky[216666]: 167 167
Oct  9 10:09:04 compute-0 systemd[1]: libpod-ed4d5e1fb0ec57311aaa0ff34d8c517a3084aa82cdf90d8f9218929c788c655f.scope: Deactivated successfully.
Oct  9 10:09:04 compute-0 conmon[216666]: conmon ed4d5e1fb0ec57311aaa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ed4d5e1fb0ec57311aaa0ff34d8c517a3084aa82cdf90d8f9218929c788c655f.scope/container/memory.events
Oct  9 10:09:04 compute-0 podman[216653]: 2025-10-09 10:09:04.866852348 +0000 UTC m=+0.123953803 container died ed4d5e1fb0ec57311aaa0ff34d8c517a3084aa82cdf90d8f9218929c788c655f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  9 10:09:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5170f0f4ab90df6764766ace746ff1bcc3b9d7481d17b7bcd397766572a2f5f-merged.mount: Deactivated successfully.
Oct  9 10:09:04 compute-0 podman[216653]: 2025-10-09 10:09:04.8937514 +0000 UTC m=+0.150852855 container remove ed4d5e1fb0ec57311aaa0ff34d8c517a3084aa82cdf90d8f9218929c788c655f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:09:04 compute-0 systemd[1]: libpod-conmon-ed4d5e1fb0ec57311aaa0ff34d8c517a3084aa82cdf90d8f9218929c788c655f.scope: Deactivated successfully.
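
The block above (container create, init, start, attach, died, remove, all within roughly 150 ms) has the signature of a cephadm probe: a throwaway container run against the ceph image that prints one value and exits. The "167 167" printed by bold_heyrovsky is consistent with reading the ceph user's uid/gid inside the image (167 is the ceph uid/gid on RHEL-family builds). A minimal sketch of an equivalent probe, assuming podman on the host; the stat target is illustrative, not recovered from the log:

    # Hypothetical equivalent of the bold_heyrovsky probe (assumption: the
    # "167 167" output is the ceph uid/gid read from the image).
    import subprocess

    CEPH_IMAGE = ("quay.io/ceph/ceph@sha256:"
                  "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    def probe_ceph_uid_gid(image: str = CEPH_IMAGE) -> tuple[int, int]:
        """Run a throwaway container and read the owner of /var/lib/ceph."""
        out = subprocess.run(
            ["podman", "run", "--rm", "--entrypoint", "stat",
             image, "-c", "%u %g", "/var/lib/ceph"],
            capture_output=True, text=True, check=True,
        ).stdout.split()
        return int(out[0]), int(out[1])

    print(*probe_ceph_uid_gid())  # expected: 167 167, as logged above
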
Oct  9 10:09:05 compute-0 podman[216688]: 2025-10-09 10:09:05.04961309 +0000 UTC m=+0.043356116 container create 9e7e214271b66e4e94a6f26df1be3e5394c7aa6c201fb5e56e9c7845f078fefc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_elbakyan, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  9 10:09:05 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 10:09:05 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 10:09:05 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 10:09:05 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:09:05 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:09:05 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:09:05 compute-0 systemd[1]: Started libpod-conmon-9e7e214271b66e4e94a6f26df1be3e5394c7aa6c201fb5e56e9c7845f078fefc.scope.
Oct  9 10:09:05 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:09:05 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1ba68917d3d17c0457a3b6be5ce2995aa0442ff581238117972340f58874e8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1ba68917d3d17c0457a3b6be5ce2995aa0442ff581238117972340f58874e8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1ba68917d3d17c0457a3b6be5ce2995aa0442ff581238117972340f58874e8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1ba68917d3d17c0457a3b6be5ce2995aa0442ff581238117972340f58874e8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1ba68917d3d17c0457a3b6be5ce2995aa0442ff581238117972340f58874e8e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
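
The xfs warnings above fire each time podman bind-mounts host paths into the container overlay: filesystems formatted without the xfs "bigtime" feature cap inode timestamps at 2038-01-19 (0x7fffffff). They are informational, not errors. A hedged check, assuming the container storage sits on the mount point implied by the logged paths:

    # bigtime=0 in xfs_info output would explain the "timestamps until
    # 2038" lines; the mount point below is an assumption.
    import subprocess

    def has_bigtime(mountpoint: str = "/var/lib/containers") -> bool:
        info = subprocess.run(["xfs_info", mountpoint],
                              capture_output=True, text=True, check=True).stdout
        return "bigtime=1" in info

    print(has_bigtime())
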
Oct  9 10:09:05 compute-0 podman[216688]: 2025-10-09 10:09:05.126028833 +0000 UTC m=+0.119771858 container init 9e7e214271b66e4e94a6f26df1be3e5394c7aa6c201fb5e56e9c7845f078fefc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  9 10:09:05 compute-0 podman[216688]: 2025-10-09 10:09:05.034126568 +0000 UTC m=+0.027869613 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:09:05 compute-0 podman[216688]: 2025-10-09 10:09:05.133854829 +0000 UTC m=+0.127597845 container start 9e7e214271b66e4e94a6f26df1be3e5394c7aa6c201fb5e56e9c7845f078fefc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_elbakyan, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  9 10:09:05 compute-0 podman[216688]: 2025-10-09 10:09:05.135481959 +0000 UTC m=+0.129225004 container attach 9e7e214271b66e4e94a6f26df1be3e5394c7aa6c201fb5e56e9c7845f078fefc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_elbakyan, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct  9 10:09:05 compute-0 nova_compute[187439]: 2025-10-09 10:09:05.411 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:09:05 compute-0 nova_compute[187439]: 2025-10-09 10:09:05.414 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:09:05 compute-0 flamboyant_elbakyan[216701]: --> passed data devices: 0 physical, 1 LVM
Oct  9 10:09:05 compute-0 flamboyant_elbakyan[216701]: --> All data devices are unavailable
Oct  9 10:09:05 compute-0 systemd[1]: libpod-9e7e214271b66e4e94a6f26df1be3e5394c7aa6c201fb5e56e9c7845f078fefc.scope: Deactivated successfully.
Oct  9 10:09:05 compute-0 podman[216716]: 2025-10-09 10:09:05.501854507 +0000 UTC m=+0.025683160 container died 9e7e214271b66e4e94a6f26df1be3e5394c7aa6c201fb5e56e9c7845f078fefc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_elbakyan, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  9 10:09:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1ba68917d3d17c0457a3b6be5ce2995aa0442ff581238117972340f58874e8e-merged.mount: Deactivated successfully.
Oct  9 10:09:05 compute-0 podman[216716]: 2025-10-09 10:09:05.53680399 +0000 UTC m=+0.060632632 container remove 9e7e214271b66e4e94a6f26df1be3e5394c7aa6c201fb5e56e9c7845f078fefc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_elbakyan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True)
Oct  9 10:09:05 compute-0 systemd[1]: libpod-conmon-9e7e214271b66e4e94a6f26df1be3e5394c7aa6c201fb5e56e9c7845f078fefc.scope: Deactivated successfully.
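
flamboyant_elbakyan looks like a ceph-volume dry run over the host's drive group: it was passed one LVM data device (0 physical) and reported all data devices unavailable, which is expected here since /dev/ceph_vg0/ceph_lv0 is already consumed by an OSD. A related check, sketched under the assumption that cephadm is on PATH on this node (it is driving these containers); "path", "available" and "rejected_reasons" are real fields of the ceph-volume inventory JSON:

    # List devices ceph-volume still considers usable for new OSDs; on
    # this host every device should be rejected, matching the log above.
    import json, subprocess

    raw = subprocess.run(
        ["cephadm", "ceph-volume", "--", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for dev in json.loads(raw):
        if dev.get("available"):
            print(dev["path"], "available")
        else:
            print(dev["path"], "rejected:",
                  ", ".join(dev.get("rejected_reasons", [])))
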
Oct  9 10:09:06 compute-0 podman[216808]: 2025-10-09 10:09:06.087004863 +0000 UTC m=+0.037829725 container create 6332aba1826c86cf14de778552b60e0239852c2beb7f4218bb187a2b61266634 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  9 10:09:06 compute-0 systemd[1]: Started libpod-conmon-6332aba1826c86cf14de778552b60e0239852c2beb7f4218bb187a2b61266634.scope.
Oct  9 10:09:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:09:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:06.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:09:06 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:09:06 compute-0 podman[216808]: 2025-10-09 10:09:06.161570698 +0000 UTC m=+0.112395560 container init 6332aba1826c86cf14de778552b60e0239852c2beb7f4218bb187a2b61266634 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_khayyam, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 10:09:06 compute-0 podman[216808]: 2025-10-09 10:09:06.070604808 +0000 UTC m=+0.021429680 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:09:06 compute-0 podman[216808]: 2025-10-09 10:09:06.167954646 +0000 UTC m=+0.118779508 container start 6332aba1826c86cf14de778552b60e0239852c2beb7f4218bb187a2b61266634 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_khayyam, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  9 10:09:06 compute-0 podman[216808]: 2025-10-09 10:09:06.169523704 +0000 UTC m=+0.120348567 container attach 6332aba1826c86cf14de778552b60e0239852c2beb7f4218bb187a2b61266634 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_khayyam, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  9 10:09:06 compute-0 happy_khayyam[216823]: 167 167
Oct  9 10:09:06 compute-0 systemd[1]: libpod-6332aba1826c86cf14de778552b60e0239852c2beb7f4218bb187a2b61266634.scope: Deactivated successfully.
Oct  9 10:09:06 compute-0 podman[216808]: 2025-10-09 10:09:06.173039033 +0000 UTC m=+0.123863895 container died 6332aba1826c86cf14de778552b60e0239852c2beb7f4218bb187a2b61266634 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:09:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c2d6bc12b16ef5c8392f4e4273c14c0883222689d1d960839b022daed87d7c3-merged.mount: Deactivated successfully.
Oct  9 10:09:06 compute-0 podman[216808]: 2025-10-09 10:09:06.195644469 +0000 UTC m=+0.146469331 container remove 6332aba1826c86cf14de778552b60e0239852c2beb7f4218bb187a2b61266634 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  9 10:09:06 compute-0 systemd[1]: libpod-conmon-6332aba1826c86cf14de778552b60e0239852c2beb7f4218bb187a2b61266634.scope: Deactivated successfully.
Oct  9 10:09:06 compute-0 nova_compute[187439]: 2025-10-09 10:09:06.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:09:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1086: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct  9 10:09:06 compute-0 podman[216846]: 2025-10-09 10:09:06.354422488 +0000 UTC m=+0.044774640 container create 2152c7affd94875208a9323b83a7fc57217b5085e7211ba2fb6a61546a3a00da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_herschel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:09:06 compute-0 systemd[1]: Started libpod-conmon-2152c7affd94875208a9323b83a7fc57217b5085e7211ba2fb6a61546a3a00da.scope.
Oct  9 10:09:06 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:09:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/671eab336ad7dcb2f6bf2e8fc724422439556ab30ebefbc6a81791e8c243db52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:09:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/671eab336ad7dcb2f6bf2e8fc724422439556ab30ebefbc6a81791e8c243db52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:09:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/671eab336ad7dcb2f6bf2e8fc724422439556ab30ebefbc6a81791e8c243db52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:09:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/671eab336ad7dcb2f6bf2e8fc724422439556ab30ebefbc6a81791e8c243db52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:09:06 compute-0 podman[216846]: 2025-10-09 10:09:06.426728201 +0000 UTC m=+0.117080353 container init 2152c7affd94875208a9323b83a7fc57217b5085e7211ba2fb6a61546a3a00da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_herschel, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  9 10:09:06 compute-0 podman[216846]: 2025-10-09 10:09:06.333376913 +0000 UTC m=+0.023729086 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:09:06 compute-0 podman[216846]: 2025-10-09 10:09:06.439765855 +0000 UTC m=+0.130118007 container start 2152c7affd94875208a9323b83a7fc57217b5085e7211ba2fb6a61546a3a00da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  9 10:09:06 compute-0 podman[216846]: 2025-10-09 10:09:06.444196389 +0000 UTC m=+0.134548542 container attach 2152c7affd94875208a9323b83a7fc57217b5085e7211ba2fb6a61546a3a00da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Oct  9 10:09:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:06.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:06 compute-0 determined_herschel[216859]: {
Oct  9 10:09:06 compute-0 determined_herschel[216859]:    "1": [
Oct  9 10:09:06 compute-0 determined_herschel[216859]:        {
Oct  9 10:09:06 compute-0 determined_herschel[216859]:            "devices": [
Oct  9 10:09:06 compute-0 determined_herschel[216859]:                "/dev/loop3"
Oct  9 10:09:06 compute-0 determined_herschel[216859]:            ],
Oct  9 10:09:06 compute-0 determined_herschel[216859]:            "lv_name": "ceph_lv0",
Oct  9 10:09:06 compute-0 determined_herschel[216859]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:09:06 compute-0 determined_herschel[216859]:            "lv_size": "21470642176",
Oct  9 10:09:06 compute-0 determined_herschel[216859]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 10:09:06 compute-0 determined_herschel[216859]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 10:09:06 compute-0 determined_herschel[216859]:            "name": "ceph_lv0",
Oct  9 10:09:06 compute-0 determined_herschel[216859]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:09:06 compute-0 determined_herschel[216859]:            "tags": {
Oct  9 10:09:06 compute-0 determined_herschel[216859]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:09:06 compute-0 determined_herschel[216859]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 10:09:06 compute-0 determined_herschel[216859]:                "ceph.cephx_lockbox_secret": "",
Oct  9 10:09:06 compute-0 determined_herschel[216859]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 10:09:06 compute-0 determined_herschel[216859]:                "ceph.cluster_name": "ceph",
Oct  9 10:09:06 compute-0 determined_herschel[216859]:                "ceph.crush_device_class": "",
Oct  9 10:09:06 compute-0 determined_herschel[216859]:                "ceph.encrypted": "0",
Oct  9 10:09:06 compute-0 determined_herschel[216859]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 10:09:06 compute-0 determined_herschel[216859]:                "ceph.osd_id": "1",
Oct  9 10:09:06 compute-0 determined_herschel[216859]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 10:09:06 compute-0 determined_herschel[216859]:                "ceph.type": "block",
Oct  9 10:09:06 compute-0 determined_herschel[216859]:                "ceph.vdo": "0",
Oct  9 10:09:06 compute-0 determined_herschel[216859]:                "ceph.with_tpm": "0"
Oct  9 10:09:06 compute-0 determined_herschel[216859]:            },
Oct  9 10:09:06 compute-0 determined_herschel[216859]:            "type": "block",
Oct  9 10:09:06 compute-0 determined_herschel[216859]:            "vg_name": "ceph_vg0"
Oct  9 10:09:06 compute-0 determined_herschel[216859]:        }
Oct  9 10:09:06 compute-0 determined_herschel[216859]:    ]
Oct  9 10:09:06 compute-0 determined_herschel[216859]: }
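
The JSON emitted by determined_herschel is `ceph-volume lvm list --format json` output: a map of OSD id to its logical volumes, with the lv_tags string duplicated in parsed form under "tags". A minimal parser for exactly the fields shown above:

    # Map OSD id -> block device info from the ceph-volume lvm list JSON.
    import json

    def osd_block_devices(lvm_list_json: str) -> dict[int, dict]:
        result = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:
                if lv.get("type") == "block":
                    result[int(osd_id)] = {
                        "lv_path": lv["lv_path"],    # /dev/ceph_vg0/ceph_lv0
                        "devices": lv["devices"],    # ["/dev/loop3"]
                        "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                    }
        return result
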
Oct  9 10:09:06 compute-0 systemd[1]: libpod-2152c7affd94875208a9323b83a7fc57217b5085e7211ba2fb6a61546a3a00da.scope: Deactivated successfully.
Oct  9 10:09:06 compute-0 podman[216869]: 2025-10-09 10:09:06.774104369 +0000 UTC m=+0.026589129 container died 2152c7affd94875208a9323b83a7fc57217b5085e7211ba2fb6a61546a3a00da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  9 10:09:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-671eab336ad7dcb2f6bf2e8fc724422439556ab30ebefbc6a81791e8c243db52-merged.mount: Deactivated successfully.
Oct  9 10:09:06 compute-0 podman[216869]: 2025-10-09 10:09:06.802489493 +0000 UTC m=+0.054974243 container remove 2152c7affd94875208a9323b83a7fc57217b5085e7211ba2fb6a61546a3a00da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_herschel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 10:09:06 compute-0 systemd[1]: libpod-conmon-2152c7affd94875208a9323b83a7fc57217b5085e7211ba2fb6a61546a3a00da.scope: Deactivated successfully.
Oct  9 10:09:06 compute-0 nova_compute[187439]: 2025-10-09 10:09:06.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:07.119Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:07.130Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:07.132Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:07.134Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
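
Alertmanager's ceph-dashboard webhook notifications fail because the three .shiftstack hostnames do not resolve against the nameserver at 192.168.122.80. The failure reproduces with a plain resolver lookup; this sketch uses the system resolver, which may differ from the server the alertmanager container queries:

    # Reproduce the "no such host" failures (hostnames taken from the log).
    import socket

    for host in ("np0005478302.shiftstack",
                 "np0005478303.shiftstack",
                 "np0005478304.shiftstack"):
        try:
            addrs = sorted({ai[4][0] for ai in socket.getaddrinfo(host, 8443)})
            print(host, addrs)
        except socket.gaierror as exc:
            print(host, "unresolved:", exc)  # matches "no such host"
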
Oct  9 10:09:07 compute-0 podman[216961]: 2025-10-09 10:09:07.370630457 +0000 UTC m=+0.037062449 container create 83984f6281d02e642fb389e7c3d44ab4d6c63ef062b1fb6239ad4e344f4609a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct  9 10:09:07 compute-0 systemd[1]: Started libpod-conmon-83984f6281d02e642fb389e7c3d44ab4d6c63ef062b1fb6239ad4e344f4609a6.scope.
Oct  9 10:09:07 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:09:07 compute-0 podman[216961]: 2025-10-09 10:09:07.438639457 +0000 UTC m=+0.105071438 container init 83984f6281d02e642fb389e7c3d44ab4d6c63ef062b1fb6239ad4e344f4609a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:09:07 compute-0 podman[216961]: 2025-10-09 10:09:07.444984141 +0000 UTC m=+0.111416122 container start 83984f6281d02e642fb389e7c3d44ab4d6c63ef062b1fb6239ad4e344f4609a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lamport, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  9 10:09:07 compute-0 podman[216961]: 2025-10-09 10:09:07.446938506 +0000 UTC m=+0.113370488 container attach 83984f6281d02e642fb389e7c3d44ab4d6c63ef062b1fb6239ad4e344f4609a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lamport, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:09:07 compute-0 compassionate_lamport[216974]: 167 167
Oct  9 10:09:07 compute-0 systemd[1]: libpod-83984f6281d02e642fb389e7c3d44ab4d6c63ef062b1fb6239ad4e344f4609a6.scope: Deactivated successfully.
Oct  9 10:09:07 compute-0 podman[216961]: 2025-10-09 10:09:07.451868592 +0000 UTC m=+0.118300574 container died 83984f6281d02e642fb389e7c3d44ab4d6c63ef062b1fb6239ad4e344f4609a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:09:07 compute-0 podman[216961]: 2025-10-09 10:09:07.357571141 +0000 UTC m=+0.024003133 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:09:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-30a5af3eafe3245afc7cef5c38632b97a9b3b1f0b7104ae0ceae38d8a08bbf8d-merged.mount: Deactivated successfully.
Oct  9 10:09:07 compute-0 podman[216961]: 2025-10-09 10:09:07.475186221 +0000 UTC m=+0.141618203 container remove 83984f6281d02e642fb389e7c3d44ab4d6c63ef062b1fb6239ad4e344f4609a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  9 10:09:07 compute-0 systemd[1]: libpod-conmon-83984f6281d02e642fb389e7c3d44ab4d6c63ef062b1fb6239ad4e344f4609a6.scope: Deactivated successfully.
Oct  9 10:09:07 compute-0 podman[216996]: 2025-10-09 10:09:07.644837473 +0000 UTC m=+0.036861719 container create 6e895557fd32a76f3031c81452a5bcfc94065a2bdc1c88434d69877254683ea9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_goldwasser, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:09:07 compute-0 systemd[1]: Started libpod-conmon-6e895557fd32a76f3031c81452a5bcfc94065a2bdc1c88434d69877254683ea9.scope.
Oct  9 10:09:07 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9356895e8b7f86d42fac7c5005d0c1326be8727e2944ebb459ae24729440c67a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9356895e8b7f86d42fac7c5005d0c1326be8727e2944ebb459ae24729440c67a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9356895e8b7f86d42fac7c5005d0c1326be8727e2944ebb459ae24729440c67a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9356895e8b7f86d42fac7c5005d0c1326be8727e2944ebb459ae24729440c67a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:09:07 compute-0 podman[216996]: 2025-10-09 10:09:07.712539045 +0000 UTC m=+0.104563301 container init 6e895557fd32a76f3031c81452a5bcfc94065a2bdc1c88434d69877254683ea9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  9 10:09:07 compute-0 podman[216996]: 2025-10-09 10:09:07.721794416 +0000 UTC m=+0.113818652 container start 6e895557fd32a76f3031c81452a5bcfc94065a2bdc1c88434d69877254683ea9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_goldwasser, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:09:07 compute-0 podman[216996]: 2025-10-09 10:09:07.723696884 +0000 UTC m=+0.115721121 container attach 6e895557fd32a76f3031c81452a5bcfc94065a2bdc1c88434d69877254683ea9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:09:07 compute-0 podman[216996]: 2025-10-09 10:09:07.63188516 +0000 UTC m=+0.023909417 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:09:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:08 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:08.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
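
The anonymous "HEAD / HTTP/1.0" 200 requests hitting radosgw every ~2 s from 192.168.122.100 and .102 have the shape of load-balancer health probes. An equivalent check; the host and port below are placeholder assumptions, since the log does not record which frontend port beast is bound to:

    # HEAD / against radosgw; expect an empty 200 as in the beast lines.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200
    conn.close()
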
Oct  9 10:09:08 compute-0 nova_compute[187439]: 2025-10-09 10:09:08.247 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:09:08 compute-0 nova_compute[187439]: 2025-10-09 10:09:08.248 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  9 10:09:08 compute-0 unruffled_goldwasser[217009]: {}
Oct  9 10:09:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1087: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct  9 10:09:08 compute-0 lvm[217087]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:09:08 compute-0 lvm[217087]: VG ceph_vg0 finished
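
The lvm[217087] lines are udev-triggered pvscan autoactivation: once /dev/loop3 appears, volume group ceph_vg0 is complete and its LVs can be activated. A companion query to confirm what pvscan saw, using standard LVM2 JSON reporting:

    # Confirm the autoactivated VG/LV; `lvs --reportformat json` is stock LVM2.
    import json, subprocess

    raw = subprocess.run(
        ["lvs", "--reportformat", "json",
         "-o", "lv_name,vg_name,lv_size", "ceph_vg0"],
        capture_output=True, text=True, check=True,
    ).stdout
    for lv in json.loads(raw)["report"][0]["lv"]:
        print(lv["vg_name"], lv["lv_name"], lv["lv_size"])  # ceph_vg0 ceph_lv0 ...
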
Oct  9 10:09:08 compute-0 systemd[1]: libpod-6e895557fd32a76f3031c81452a5bcfc94065a2bdc1c88434d69877254683ea9.scope: Deactivated successfully.
Oct  9 10:09:08 compute-0 podman[216996]: 2025-10-09 10:09:08.295641659 +0000 UTC m=+0.687665895 container died 6e895557fd32a76f3031c81452a5bcfc94065a2bdc1c88434d69877254683ea9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:09:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-9356895e8b7f86d42fac7c5005d0c1326be8727e2944ebb459ae24729440c67a-merged.mount: Deactivated successfully.
Oct  9 10:09:08 compute-0 podman[216996]: 2025-10-09 10:09:08.325929942 +0000 UTC m=+0.717954178 container remove 6e895557fd32a76f3031c81452a5bcfc94065a2bdc1c88434d69877254683ea9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_goldwasser, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:09:08 compute-0 systemd[1]: libpod-conmon-6e895557fd32a76f3031c81452a5bcfc94065a2bdc1c88434d69877254683ea9.scope: Deactivated successfully.
Oct  9 10:09:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:09:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:09:08 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:09:08 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:09:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:08.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:08.950Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:08.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:08.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:08.964Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:09 compute-0 nova_compute[187439]: 2025-10-09 10:09:09.249 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:09:09 compute-0 nova_compute[187439]: 2025-10-09 10:09:09.249 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  9 10:09:09 compute-0 nova_compute[187439]: 2025-10-09 10:09:09.249 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  9 10:09:09 compute-0 nova_compute[187439]: 2025-10-09 10:09:09.259 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  9 10:09:09 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:09:09 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:09:09 compute-0 nova_compute[187439]: 2025-10-09 10:09:09.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:09:10.120 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 10:09:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:09:10.121 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 10:09:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:09:10.121 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 10:09:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:10.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
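Each radosgw triplet above is one anonymous health probe: beast accepts a HEAD / over HTTP/1.0 and answers 200 with an empty body, the usual signature of a load-balancer check (the probes alternate between clients 192.168.122.102 and .100 every two seconds). An equivalent probe using only the standard library; the target host and port are assumptions, since the access log records only the client side (7480 is a common beast default):

    # Stand-alone equivalent of the anonymous "HEAD /" checks logged above.
    # Host and port are assumed; the log does not record the listening port.
    import http.client

    conn = http.client.HTTPConnection("compute-0", 7480, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status, resp.reason)   # expect 200, matching http_status=200
    conn.close()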
Oct  9 10:09:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1088: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 718 B/s rd, 0 op/s
Oct  9 10:09:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:10.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:10 compute-0 podman[217126]: 2025-10-09 10:09:10.621751357 +0000 UTC m=+0.053443275 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  9 10:09:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  9 10:09:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3764022889' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  9 10:09:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  9 10:09:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3764022889' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
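The two handle_command entries are the OpenStack side (entity client.openstack, connecting from 192.168.122.10, typically the Cinder RBD driver) polling cluster capacity: a df followed by an osd pool get-quota on the volumes pool. The same mon commands can be issued through python-rados; the conffile path and client name below are assumptions, while the JSON payloads are copied from the audit log:

    # Replay the audited mon commands via python-rados.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
        ret, out, err = cluster.mon_command(json.dumps(cmd), b"")
        print(cmd["prefix"], "->", ret, out[:80])
    cluster.shutdown()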
Oct  9 10:09:11 compute-0 nova_compute[187439]: 2025-10-09 10:09:11.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:12.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:09:12] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Oct  9 10:09:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:09:12] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
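The paired lines (container stdout and the mgr's own log channel) record Prometheus 2.51.0 scraping the ceph-mgr prometheus module, about 48 KiB of metrics per scrape at a 10-second interval. A manual scrape with the standard library; the URL is an assumption, since the log shows only the client address (9283 is the module's default port):

    # Manual scrape equivalent to the GET /metrics entries above.
    # Host and port are assumptions (9283 is the mgr module default).
    import urllib.request

    with urllib.request.urlopen("http://compute-0:9283/metrics", timeout=5) as resp:
        body = resp.read()
        print(resp.status, len(body), "bytes")   # log shows 200 with ~48 KiB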
Oct  9 10:09:12 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1089: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct  9 10:09:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:09:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:12.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:09:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:13 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:13.590Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:13.599Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:13.599Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:13.599Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
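This four-line block is the steady state of the DNS problem: the dispatcher gives up on a notification after 7 or 8 backed-off attempts per webhook ("notify retry canceled"), then immediately starts a fresh attempt for the still-firing alert group, which is why the same three hostnames reappear every few seconds for the rest of the log. A generic retry-then-give-up loop illustrating that pattern; the notify() stub and limits are invented for the example and are not Alertmanager's Go implementation:

    # Retry with exponential backoff, then cancel, mirroring the dispatcher
    # behaviour above. notify() is a stand-in that always fails like the log.
    import time


    def notify(host):
        raise OSError(f"dial tcp: lookup {host}: no such host")


    def notify_with_retries(host, max_attempts=8, base_delay=0.1):
        for attempt in range(1, max_attempts + 1):
            try:
                notify(host)
                return True
            except OSError as err:
                print(f"attempt {attempt} failed: {err}")
                time.sleep(min(base_delay * 2 ** (attempt - 1), 5.0))
        print(f"notify retry canceled after {max_attempts} attempts")
        return False


    notify_with_retries("np0005478302.shiftstack")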
Oct  9 10:09:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:09:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:14.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:09:14 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1090: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  9 10:09:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:14.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:14 compute-0 nova_compute[187439]: 2025-10-09 10:09:14.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:15 compute-0 podman[217171]: 2025-10-09 10:09:15.603702282 +0000 UTC m=+0.045590048 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  9 10:09:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:16.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:16 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1091: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:09:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:16.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:16 compute-0 nova_compute[187439]: 2025-10-09 10:09:16.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:17.121Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:17.134Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:17.134Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:17.134Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:17 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:17 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:17 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:18 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:09:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:18.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:09:18 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1092: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:09:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:18.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:18.951Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:18.957Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:18.958Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:18.958Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:09:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:09:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:09:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:09:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:09:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:09:19 compute-0 nova_compute[187439]: 2025-10-09 10:09:19.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:09:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:09:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:20.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:20 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1093: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:09:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:20.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:21 compute-0 nova_compute[187439]: 2025-10-09 10:09:21.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:22.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:09:22] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Oct  9 10:09:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:09:22] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Oct  9 10:09:22 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1094: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:09:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:22.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:23 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:23.591Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:23.598Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:23.598Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:23.598Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:24.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:24 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1095: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:09:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.002000021s ======
Oct  9 10:09:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:24.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000021s
Oct  9 10:09:24 compute-0 nova_compute[187439]: 2025-10-09 10:09:24.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:26.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:26 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1096: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:09:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:26.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:26 compute-0 podman[217201]: 2025-10-09 10:09:26.657841498 +0000 UTC m=+0.084286565 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 10:09:26 compute-0 nova_compute[187439]: 2025-10-09 10:09:26.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:27.122Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:27.270Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:27.274Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:27.304Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:28 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:28.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:28 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1097: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:09:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:28.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:28.952Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:28.961Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:28.961Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:28.961Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:29 compute-0 nova_compute[187439]: 2025-10-09 10:09:29.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:09:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:30.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:09:30 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1098: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:09:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:30.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:31 compute-0 nova_compute[187439]: 2025-10-09 10:09:31.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:32.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:09:32] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Oct  9 10:09:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:09:32] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Oct  9 10:09:32 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1099: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:09:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:32.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:32 compute-0 podman[217230]: 2025-10-09 10:09:32.61389247 +0000 UTC m=+0.051930484 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3)
Oct  9 10:09:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:32 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:32 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:32 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:33.591Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:33.599Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:33.599Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:33.600Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:09:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:34.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:09:34 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1100: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:09:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:34.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:09:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:09:34 compute-0 nova_compute[187439]: 2025-10-09 10:09:34.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:36.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:36 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1101: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:09:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:36.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:36 compute-0 nova_compute[187439]: 2025-10-09 10:09:36.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:37.122Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:37.130Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:37.131Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:37.131Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:38 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:38.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:38 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1102: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:09:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:38.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:38.952Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:38.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:38.978Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:38.978Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:39 compute-0 nova_compute[187439]: 2025-10-09 10:09:39.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:40.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:40 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1103: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:09:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:40.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:41 compute-0 podman[217281]: 2025-10-09 10:09:41.613748909 +0000 UTC m=+0.049068568 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  9 10:09:41 compute-0 nova_compute[187439]: 2025-10-09 10:09:41.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:42.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:09:42] "GET /metrics HTTP/1.1" 200 48529 "" "Prometheus/2.51.0"
Oct  9 10:09:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:09:42] "GET /metrics HTTP/1.1" 200 48529 "" "Prometheus/2.51.0"
Oct  9 10:09:42 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1104: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:09:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:09:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:42.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:09:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:43.591Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:43.593Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:43.597Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:43.597Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:44.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:44 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1105: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:09:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:44.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:44 compute-0 nova_compute[187439]: 2025-10-09 10:09:44.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:46.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:46 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1106: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:09:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:46.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:46 compute-0 podman[217303]: 2025-10-09 10:09:46.611662514 +0000 UTC m=+0.049222547 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:09:46 compute-0 nova_compute[187439]: 2025-10-09 10:09:46.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:47.123Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:47.132Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:47.132Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:47.132Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:48 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:48.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:48 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1107: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:09:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:09:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:48.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:09:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:48.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:48.966Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:48.966Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:48.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_10:09:49
Oct  9 10:09:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 10:09:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 10:09:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['.nfs', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'volumes', 'default.rgw.meta', '.mgr']
Oct  9 10:09:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 10:09:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:09:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:09:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:09:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:09:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:09:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:09:49 compute-0 nova_compute[187439]: 2025-10-09 10:09:49.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:09:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:09:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 10:09:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:09:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:09:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:09:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:09:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 10:09:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:09:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:09:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:09:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:09:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:50.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:50 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1108: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:09:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:50.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:51 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:51 compute-0 nova_compute[187439]: 2025-10-09 10:09:51.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:52.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:52 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:09:52] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Oct  9 10:09:52 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:09:52] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Oct  9 10:09:52 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1109: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:09:52 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:52 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:52 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:52.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:52 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:53 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:53.592Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:53.602Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:53.602Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:53 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:53.603Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:54.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:54 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1110: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:09:54 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:54 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:54 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:54.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:54 compute-0 nova_compute[187439]: 2025-10-09 10:09:54.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:56.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:56 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1111: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:09:56 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:56 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:56 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:56 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:56.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:56 compute-0 nova_compute[187439]: 2025-10-09 10:09:56.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:57.124Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:57.135Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:57.135Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:57 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:57.136Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:57 compute-0 podman[217356]: 2025-10-09 10:09:57.64233353 +0000 UTC m=+0.075846420 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  9 10:09:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:57 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:09:58 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:58.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:58 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1112: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:09:58 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:09:58 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:58 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:58.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:58.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:58.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:58.964Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:58 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:09:58.964Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] _maybe_adjust
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  9 10:09:59 compute-0 ceph-mgr[4772]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  9 10:09:59 compute-0 nova_compute[187439]: 2025-10-09 10:09:59.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:00 compute-0 ceph-mon[4497]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 1 failed cephadm daemon(s)
Oct  9 10:10:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:10:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:00.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:10:00 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1113: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:10:00 compute-0 ceph-mon[4497]: overall HEALTH_WARN 1 failed cephadm daemon(s)
Oct  9 10:10:00 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:00 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:00 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:00.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:01 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:01 compute-0 nova_compute[187439]: 2025-10-09 10:10:01.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:02.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:02 compute-0 nova_compute[187439]: 2025-10-09 10:10:02.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:02 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:10:02] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Oct  9 10:10:02 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:10:02] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Oct  9 10:10:02 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1114: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:10:02 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:02 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:02 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:02.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:10:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:10:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:02 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:10:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:03 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:10:03 compute-0 nova_compute[187439]: 2025-10-09 10:10:03.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:03 compute-0 systemd[1]: Starting system activity accounting tool...
Oct  9 10:10:03 compute-0 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct  9 10:10:03 compute-0 systemd[1]: Finished system activity accounting tool.
Oct  9 10:10:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:03.594Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:03 compute-0 podman[217385]: 2025-10-09 10:10:03.612375753 +0000 UTC m=+0.051337676 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  9 10:10:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:03.665Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:03.666Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:03 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:03.666Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:04.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:04 compute-0 nova_compute[187439]: 2025-10-09 10:10:04.242 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:04 compute-0 nova_compute[187439]: 2025-10-09 10:10:04.245 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:04 compute-0 nova_compute[187439]: 2025-10-09 10:10:04.245 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:04 compute-0 nova_compute[187439]: 2025-10-09 10:10:04.259 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:10:04 compute-0 nova_compute[187439]: 2025-10-09 10:10:04.260 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:10:04 compute-0 nova_compute[187439]: 2025-10-09 10:10:04.260 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:10:04 compute-0 nova_compute[187439]: 2025-10-09 10:10:04.260 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:10:04 compute-0 nova_compute[187439]: 2025-10-09 10:10:04.260 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:10:04 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1115: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:10:04 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:04 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:04 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:04.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:10:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:10:04 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:10:04 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1732046277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:10:04 compute-0 nova_compute[187439]: 2025-10-09 10:10:04.628 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.368s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:10:04 compute-0 nova_compute[187439]: 2025-10-09 10:10:04.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:04 compute-0 nova_compute[187439]: 2025-10-09 10:10:04.874 2 WARNING nova.virt.libvirt.driver [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:10:04 compute-0 nova_compute[187439]: 2025-10-09 10:10:04.876 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4613MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:10:04 compute-0 nova_compute[187439]: 2025-10-09 10:10:04.876 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:10:04 compute-0 nova_compute[187439]: 2025-10-09 10:10:04.876 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:10:04 compute-0 nova_compute[187439]: 2025-10-09 10:10:04.927 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:10:04 compute-0 nova_compute[187439]: 2025-10-09 10:10:04.927 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:10:04 compute-0 nova_compute[187439]: 2025-10-09 10:10:04.938 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:10:05 compute-0 nova_compute[187439]: 2025-10-09 10:10:05.293 2 DEBUG oslo_concurrency.processutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
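[note] The two processutils lines above show the resource tracker shelling out for Ceph pool statistics (the ephemeral backend is RBD, so free disk comes from `ceph df`). The same call, reduced to a sketch; the client id and conffile are taken from the log, and the JSON field names follow `ceph df --format=json` output as an assumption:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # Field names assumed per `ceph df` JSON output:
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])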
Oct  9 10:10:05 compute-0 nova_compute[187439]: 2025-10-09 10:10:05.298 2 DEBUG nova.compute.provider_tree [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed in ProviderTree for provider: f97cf330-2912-473f-81a8-cda2f8811838 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:10:05 compute-0 nova_compute[187439]: 2025-10-09 10:10:05.309 2 DEBUG nova.scheduler.client.report [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Inventory has not changed for provider f97cf330-2912-473f-81a8-cda2f8811838 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
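[note] The inventory dict above is what placement sees; effective schedulable capacity per resource class is (total - reserved) * allocation_ratio, so this host offers 16 VCPU, 7168 MB of RAM and 52.2 GB of disk. Worked out from the logged numbers:

    inventory = {
        'VCPU':      {'total': 4,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 16.0, MEMORY_MB 7168.0, DISK_GB 52.2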
Oct  9 10:10:05 compute-0 nova_compute[187439]: 2025-10-09 10:10:05.311 2 DEBUG nova.compute.resource_tracker [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:10:05 compute-0 nova_compute[187439]: 2025-10-09 10:10:05.311 2 DEBUG oslo_concurrency.lockutils [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
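[note] The Acquiring/acquired/released bookkeeping around the update ("waited 0.000s", "held 0.434s") is oslo.concurrency's instrumented lock; in nova the whole _update_available_resource body runs under the "compute_resources" semaphore. The pattern, reduced to a sketch:

    from oslo_concurrency import lockutils

    # Emits the same Acquiring/acquired/released DEBUG lines seen above.
    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        pass  # resource-tracker work happens while the lock is held

    update_available_resource()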
Oct  9 10:10:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:06.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:06 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1116: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:10:06 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:06 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:06 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:06 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:06.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
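[note] Each probe above produces a three-line radosgw pattern: request start, request done, then a beast access-log line. Anonymous "HEAD / HTTP/1.0" every two seconds from two peers looks like load-balancer health checks. The access-log line is regular enough to parse; a sketch against the line above:

    import re

    line = ('192.168.122.100 - anonymous [09/Oct/2025:10:10:06.579 +0000] '
            '"HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s')
    pat = re.compile(
        r'(?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
        r'.*latency=(?P<lat>[\d.]+)s')
    m = pat.match(line)
    print(m.group('ip'), m.group('status'), m.group('lat'))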
Oct  9 10:10:06 compute-0 nova_compute[187439]: 2025-10-09 10:10:06.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:07.125Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:07.132Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:07.132Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:07 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:07.132Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
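[note] The Alertmanager failures above are pure DNS: the three .shiftstack receiver names do not resolve via the 192.168.122.80 resolver, so every webhook notify is retried and eventually cancelled. The lookup failure is reproducible outside the container (sketch):

    import socket

    # Same names Alertmanager tries above; resolvable only if the DNS
    # server at 192.168.122.80 (or /etc/hosts) knows them.
    for host in ('np0005478302.shiftstack', 'np0005478303.shiftstack',
                 'np0005478304.shiftstack'):
        try:
            socket.getaddrinfo(host, 8443)
        except socket.gaierror as exc:
            print(f'{host}: {exc}')   # e.g. "Name or service not known"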
Oct  9 10:10:07 compute-0 nova_compute[187439]: 2025-10-09 10:10:07.312 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:07 compute-0 nova_compute[187439]: 2025-10-09 10:10:07.312 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:10:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:10:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:07 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:10:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:08 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
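[note] The IN GRACE / lift cycle above comes from ganesha's rados_cluster recovery backend: with a reclaim client-id count of 0 there is nothing to recover, so the 90-second grace window can be lifted early. That backend keeps its grace database in RADOS, and it can be inspected with the ganesha-rados-grace helper; the pool and object name below are assumptions, not taken from this deployment:

    import subprocess

    # Dump ganesha's shared grace db; --pool/--oid values are assumptions.
    subprocess.run(
        ['ganesha-rados-grace', '--pool', '.nfs', '--oid', 'grace', 'dump'],
        check=False)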
Oct  9 10:10:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:08.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:08 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1117: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:10:08 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:08 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:10:08 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:08.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:10:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:08.954Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:08.961Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:08.961Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:08 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:08.961Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:09 compute-0 nova_compute[187439]: 2025-10-09 10:10:09.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Oct  9 10:10:09 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:09 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Oct  9 10:10:09 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:10:10.122 92053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:10:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:10:10.122 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:10:10 compute-0 ovn_metadata_agent[92048]: 2025-10-09 10:10:10.122 92053 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:10:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:10:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:10.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:10:10 compute-0 nova_compute[187439]: 2025-10-09 10:10:10.246 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:10 compute-0 nova_compute[187439]: 2025-10-09 10:10:10.246 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:10:10 compute-0 nova_compute[187439]: 2025-10-09 10:10:10.246 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:10:10 compute-0 nova_compute[187439]: 2025-10-09 10:10:10.264 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 10:10:10 compute-0 nova_compute[187439]: 2025-10-09 10:10:10.264 2 DEBUG oslo_service.periodic_task [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:10 compute-0 nova_compute[187439]: 2025-10-09 10:10:10.264 2 DEBUG nova.compute.manager [None req-65bfbc08-8d43-4120-874b-79b696a7ffd2 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
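[note] The periodic-task lines above show default behaviour: with reclaim_instance_interval presumably left at 0 in nova.conf ([DEFAULT] section), deleted instances are destroyed immediately rather than soft-deleted, so _reclaim_queued_deletes short-circuits. The skip branch, reduced to a sketch:

    from oslo_config import cfg

    CONF = cfg.CONF
    # Mirrors the nova option; 0 (the default) disables soft-delete reclaim.
    CONF.register_opt(cfg.IntOpt('reclaim_instance_interval', default=0))

    if CONF.reclaim_instance_interval <= 0:
        print('CONF.reclaim_instance_interval <= 0, skipping...')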
Oct  9 10:10:10 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1118: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:10:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Oct  9 10:10:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:10 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Oct  9 10:10:10 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
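[note] The audited config-key set calls above are cephadm's mgr module persisting per-host device inventory under mgr/cephadm/host.<hostname>* keys after its periodic host refresh (compute-1 and compute-2 here). Any such key can be read back; a sketch, with the key name copied from the log:

    import subprocess

    out = subprocess.run(
        ['ceph', 'config-key', 'get', 'mgr/cephadm/host.compute-1.devices.0'],
        capture_output=True, text=True).stdout
    print(out or 'key not set')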
Oct  9 10:10:10 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:10 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:10 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:10.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:10 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:10 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:10 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:10 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:10:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:10:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Oct  9 10:10:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:10:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Oct  9 10:10:11 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1119: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 10:10:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Oct  9 10:10:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Oct  9 10:10:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  9 10:10:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Oct  9 10:10:11 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:10:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:10:11 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
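[note] The handle_command traffic above (auth get for client.admin and client.bootstrap-osd, config generate-minimal-conf, osd tree filtered to destroyed) is the cephadm module walking its OSD-refresh path before launching the ceph-volume containers that follow. The same structured commands can be issued from any client through python-rados; this assumes /etc/ceph/ceph.conf and a usable admin keyring:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({'prefix': 'osd tree', 'states': ['destroyed'],
                    'format': 'json'}), b'')
    print(ret, json.loads(outbuf) if outbuf else errs)
    cluster.shutdown()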
Oct  9 10:10:11 compute-0 podman[217616]: 2025-10-09 10:10:11.5137309 +0000 UTC m=+0.041991422 container create c963d7586333bfe50251992883adcfb71207ad85fd4a42000f4313d9e234f2f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  9 10:10:11 compute-0 systemd[1]: Started libpod-conmon-c963d7586333bfe50251992883adcfb71207ad85fd4a42000f4313d9e234f2f4.scope.
Oct  9 10:10:11 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:11 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:10:11 compute-0 podman[217616]: 2025-10-09 10:10:11.58660254 +0000 UTC m=+0.114863082 container init c963d7586333bfe50251992883adcfb71207ad85fd4a42000f4313d9e234f2f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_babbage, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:10:11.587992) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611588070, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1788, "num_deletes": 504, "total_data_size": 2847968, "memory_usage": 2888512, "flush_reason": "Manual Compaction"}
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct  9 10:10:11 compute-0 podman[217616]: 2025-10-09 10:10:11.496436778 +0000 UTC m=+0.024697301 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:10:11 compute-0 podman[217616]: 2025-10-09 10:10:11.59561298 +0000 UTC m=+0.123873502 container start c963d7586333bfe50251992883adcfb71207ad85fd4a42000f4313d9e234f2f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611597299, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 2787071, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29533, "largest_seqno": 31319, "table_properties": {"data_size": 2779283, "index_size": 4154, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 18929, "raw_average_key_size": 18, "raw_value_size": 2761688, "raw_average_value_size": 2764, "num_data_blocks": 179, "num_entries": 999, "num_filter_entries": 999, "num_deletions": 504, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004469, "oldest_key_time": 1760004469, "file_creation_time": 1760004611, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 9327 microseconds, and 7515 cpu microseconds.
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:10:11.597332) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 2787071 bytes OK
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:10:11.597353) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:10:11.597764) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:10:11.597775) EVENT_LOG_v1 {"time_micros": 1760004611597772, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:10:11.597797) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 2839448, prev total WAL file size 2839448, number of live WAL files 2.
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:10:11 compute-0 podman[217616]: 2025-10-09 10:10:11.598608228 +0000 UTC m=+0.126868750 container attach c963d7586333bfe50251992883adcfb71207ad85fd4a42000f4313d9e234f2f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_babbage, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:10:11.598838) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323533' seq:72057594037927935, type:22 .. '6B7600353038' seq:0, type:0; will stop at (end)
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(2721KB)], [65(15MB)]
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611598886, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 19141802, "oldest_snapshot_seqno": -1}
Oct  9 10:10:11 compute-0 ecstatic_babbage[217629]: 167 167
Oct  9 10:10:11 compute-0 systemd[1]: libpod-c963d7586333bfe50251992883adcfb71207ad85fd4a42000f4313d9e234f2f4.scope: Deactivated successfully.
Oct  9 10:10:11 compute-0 podman[217616]: 2025-10-09 10:10:11.603285067 +0000 UTC m=+0.131545589 container died c963d7586333bfe50251992883adcfb71207ad85fd4a42000f4313d9e234f2f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_babbage, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  9 10:10:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-59f32da27fa23e18c6c9661f5d05f94ff3ee793e298cf4e00e54aa4caa276117-merged.mount: Deactivated successfully.
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6316 keys, 13636219 bytes, temperature: kUnknown
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611639547, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 13636219, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13594805, "index_size": 24536, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15813, "raw_key_size": 164326, "raw_average_key_size": 26, "raw_value_size": 13481417, "raw_average_value_size": 2134, "num_data_blocks": 975, "num_entries": 6316, "num_filter_entries": 6316, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002419, "oldest_key_time": 0, "file_creation_time": 1760004611, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba1e7fee-fdf5-47b8-8729-cc5ad901148d", "db_session_id": "REEUAVY01GI85Z7KU96K", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:10:11.639792) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 13636219 bytes
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:10:11.643001) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 469.6 rd, 334.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 15.6 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(11.8) write-amplify(4.9) OK, records in: 7343, records dropped: 1027 output_compression: NoCompression
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:10:11.643018) EVENT_LOG_v1 {"time_micros": 1760004611643009, "job": 36, "event": "compaction_finished", "compaction_time_micros": 40766, "compaction_time_cpu_micros": 31882, "output_level": 6, "num_output_files": 1, "total_output_size": 13636219, "num_input_records": 7343, "num_output_records": 6316, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
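[note] The compaction summary above can be sanity-checked from the event numbers: job 36 rewrote a 2.7 MB L0 file plus a 15.6 MB L6 file into one 13.6 MB L6 file, dropping 1027 of 7343 records. The logged amplification factors fall straight out of those byte counts:

    # Byte counts from the compaction_started / table_file_creation events.
    l0_input    = 2_787_071    # new L0 file 67 (the flush output)
    total_input = 19_141_802   # L0 file 67 + L6 file 65
    output      = 13_636_219   # new L6 file 68

    print(round(output / l0_input, 1))                   # 4.9  write-amplify
    print(round((total_input + output) / l0_input, 1))   # 11.8 read-write-amplify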
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611643466, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct  9 10:10:11 compute-0 podman[217616]: 2025-10-09 10:10:11.64384062 +0000 UTC m=+0.172101142 container remove c963d7586333bfe50251992883adcfb71207ad85fd4a42000f4313d9e234f2f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611645381, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:10:11.598730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:10:11.647390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:10:11.647407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:10:11.647409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:10:11.647410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:10:11 compute-0 ceph-mon[4497]: rocksdb: (Original Log Time 2025/10/09-10:10:11.647411) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:10:11 compute-0 systemd[1]: libpod-conmon-c963d7586333bfe50251992883adcfb71207ad85fd4a42000f4313d9e234f2f4.scope: Deactivated successfully.
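[note] The create -> init -> start -> attach -> died -> remove burst above is a one-shot container: cephadm launches the ceph image, runs a single probe command, and removes the container within milliseconds ("167 167" is its stdout, ecstatic_babbage its random name). A rough stand-in for such a run; the image digest is copied from the log, but the probe command itself is not logged, so `ceph --version` is an assumption:

    import subprocess

    image = ('quay.io/ceph/ceph@sha256:'
             '7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec')
    # One-shot container, auto-removed on exit, like the lifecycle above.
    subprocess.run(['podman', 'run', '--rm', image, 'ceph', '--version'],
                   check=False)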
Oct  9 10:10:11 compute-0 podman[217647]: 2025-10-09 10:10:11.740291242 +0000 UTC m=+0.055970522 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
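[note] health_status=healthy above is podman executing the container's configured check, the '/openstack/healthcheck' script mounted in per the config_data shown. The same check can be driven by hand (sketch):

    import subprocess

    # Exit status 0 means healthy, matching health_status=healthy above.
    result = subprocess.run(
        ['podman', 'healthcheck', 'run', 'ovn_metadata_agent'], check=False)
    print('healthy' if result.returncode == 0 else 'unhealthy')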
Oct  9 10:10:11 compute-0 podman[217668]: 2025-10-09 10:10:11.806511109 +0000 UTC m=+0.052664980 container create 45a2a6079be342900541fdc931e5a0c3011dfdedd9ba0403769e562c8857eed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_wright, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:10:11 compute-0 systemd[1]: Started libpod-conmon-45a2a6079be342900541fdc931e5a0c3011dfdedd9ba0403769e562c8857eed9.scope.
Oct  9 10:10:11 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:10:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08adab7669b6b2e5d040e634ae9345cc7fba0567c77dea250a2030c9d531095/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:10:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08adab7669b6b2e5d040e634ae9345cc7fba0567c77dea250a2030c9d531095/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:10:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08adab7669b6b2e5d040e634ae9345cc7fba0567c77dea250a2030c9d531095/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:10:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08adab7669b6b2e5d040e634ae9345cc7fba0567c77dea250a2030c9d531095/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:10:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08adab7669b6b2e5d040e634ae9345cc7fba0567c77dea250a2030c9d531095/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 10:10:11 compute-0 podman[217668]: 2025-10-09 10:10:11.782029916 +0000 UTC m=+0.028183807 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:10:11 compute-0 podman[217668]: 2025-10-09 10:10:11.877450646 +0000 UTC m=+0.123604526 container init 45a2a6079be342900541fdc931e5a0c3011dfdedd9ba0403769e562c8857eed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:10:11 compute-0 podman[217668]: 2025-10-09 10:10:11.884476213 +0000 UTC m=+0.130630083 container start 45a2a6079be342900541fdc931e5a0c3011dfdedd9ba0403769e562c8857eed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_wright, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:10:11 compute-0 podman[217668]: 2025-10-09 10:10:11.886325351 +0000 UTC m=+0.132479220 container attach 45a2a6079be342900541fdc931e5a0c3011dfdedd9ba0403769e562c8857eed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_wright, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 10:10:11 compute-0 nova_compute[187439]: 2025-10-09 10:10:11.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:12 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:10:12 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:12 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:12 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:10:12 compute-0 distracted_wright[217681]: --> passed data devices: 0 physical, 1 LVM
Oct  9 10:10:12 compute-0 distracted_wright[217681]: --> All data devices are unavailable
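[note] distracted_wright is cephadm running ceph-volume to evaluate its drive group: one LVM-backed device was considered and rejected ("All data devices are unavailable"), so no new OSDs will be created on this host. The decision inputs can be inspected directly; a sketch, to be run inside the ceph container or wherever ceph-volume is installed, with field names assumed per ceph-volume's inventory JSON:

    import json
    import subprocess

    out = subprocess.run(['ceph-volume', 'inventory', '--format', 'json'],
                         capture_output=True, text=True).stdout
    for dev in json.loads(out or '[]'):
        # `available` is False for devices ceph-volume will not consume.
        print(dev['path'], dev['available'], dev.get('rejected_reasons'))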
Oct  9 10:10:12 compute-0 systemd[1]: libpod-45a2a6079be342900541fdc931e5a0c3011dfdedd9ba0403769e562c8857eed9.scope: Deactivated successfully.
Oct  9 10:10:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:12.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:12 compute-0 podman[217697]: 2025-10-09 10:10:12.243551352 +0000 UTC m=+0.027165356 container died 45a2a6079be342900541fdc931e5a0c3011dfdedd9ba0403769e562c8857eed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_wright, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:10:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-a08adab7669b6b2e5d040e634ae9345cc7fba0567c77dea250a2030c9d531095-merged.mount: Deactivated successfully.
Oct  9 10:10:12 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:10:12] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Oct  9 10:10:12 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:10:12] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Oct  9 10:10:12 compute-0 podman[217697]: 2025-10-09 10:10:12.304574791 +0000 UTC m=+0.088188795 container remove 45a2a6079be342900541fdc931e5a0c3011dfdedd9ba0403769e562c8857eed9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_wright, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:10:12 compute-0 systemd[1]: libpod-conmon-45a2a6079be342900541fdc931e5a0c3011dfdedd9ba0403769e562c8857eed9.scope: Deactivated successfully.
Oct  9 10:10:12 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:12 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:12 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:12.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:12 compute-0 podman[217792]: 2025-10-09 10:10:12.844505368 +0000 UTC m=+0.035530980 container create 9b717e2eee2d844c5e724ea49e30f03f704ca7f3707e5b011fde2d58de20e278 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hawking, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  9 10:10:12 compute-0 systemd[1]: Started libpod-conmon-9b717e2eee2d844c5e724ea49e30f03f704ca7f3707e5b011fde2d58de20e278.scope.
Oct  9 10:10:12 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:10:12 compute-0 podman[217792]: 2025-10-09 10:10:12.913386092 +0000 UTC m=+0.104411734 container init 9b717e2eee2d844c5e724ea49e30f03f704ca7f3707e5b011fde2d58de20e278 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:10:12 compute-0 podman[217792]: 2025-10-09 10:10:12.919587426 +0000 UTC m=+0.110613038 container start 9b717e2eee2d844c5e724ea49e30f03f704ca7f3707e5b011fde2d58de20e278 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hawking, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 10:10:12 compute-0 podman[217792]: 2025-10-09 10:10:12.920998487 +0000 UTC m=+0.112024099 container attach 9b717e2eee2d844c5e724ea49e30f03f704ca7f3707e5b011fde2d58de20e278 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hawking, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:10:12 compute-0 festive_hawking[217805]: 167 167
Oct  9 10:10:12 compute-0 systemd[1]: libpod-9b717e2eee2d844c5e724ea49e30f03f704ca7f3707e5b011fde2d58de20e278.scope: Deactivated successfully.
Oct  9 10:10:12 compute-0 podman[217792]: 2025-10-09 10:10:12.925738305 +0000 UTC m=+0.116763916 container died 9b717e2eee2d844c5e724ea49e30f03f704ca7f3707e5b011fde2d58de20e278 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hawking, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 10:10:12 compute-0 podman[217792]: 2025-10-09 10:10:12.831257748 +0000 UTC m=+0.022283369 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:10:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ba96a18e6a651ef49c8416dee872af66961821282492f76cb0b023803748405-merged.mount: Deactivated successfully.
Oct  9 10:10:12 compute-0 podman[217792]: 2025-10-09 10:10:12.950729139 +0000 UTC m=+0.141754750 container remove 9b717e2eee2d844c5e724ea49e30f03f704ca7f3707e5b011fde2d58de20e278 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct  9 10:10:12 compute-0 systemd[1]: libpod-conmon-9b717e2eee2d844c5e724ea49e30f03f704ca7f3707e5b011fde2d58de20e278.scope: Deactivated successfully.
Oct  9 10:10:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:12 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:10:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:13 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:10:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:13 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:10:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:13 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:10:13 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1120: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 10:10:13 compute-0 podman[217827]: 2025-10-09 10:10:13.100937267 +0000 UTC m=+0.035152336 container create 69663c5ca26d62f835d14cb2875c70252ad4fa5fb8ca544dd40ce2dc687cf77b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_colden, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Oct  9 10:10:13 compute-0 systemd[1]: Started libpod-conmon-69663c5ca26d62f835d14cb2875c70252ad4fa5fb8ca544dd40ce2dc687cf77b.scope.
Oct  9 10:10:13 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae3b1df4855e07547f6d851735b90522046a2084681a2321808907accbc83371/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae3b1df4855e07547f6d851735b90522046a2084681a2321808907accbc83371/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae3b1df4855e07547f6d851735b90522046a2084681a2321808907accbc83371/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae3b1df4855e07547f6d851735b90522046a2084681a2321808907accbc83371/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:10:13 compute-0 podman[217827]: 2025-10-09 10:10:13.165708514 +0000 UTC m=+0.099923582 container init 69663c5ca26d62f835d14cb2875c70252ad4fa5fb8ca544dd40ce2dc687cf77b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_colden, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  9 10:10:13 compute-0 podman[217827]: 2025-10-09 10:10:13.170608774 +0000 UTC m=+0.104823841 container start 69663c5ca26d62f835d14cb2875c70252ad4fa5fb8ca544dd40ce2dc687cf77b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  9 10:10:13 compute-0 podman[217827]: 2025-10-09 10:10:13.172375806 +0000 UTC m=+0.106590884 container attach 69663c5ca26d62f835d14cb2875c70252ad4fa5fb8ca544dd40ce2dc687cf77b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_colden, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:10:13 compute-0 podman[217827]: 2025-10-09 10:10:13.087114213 +0000 UTC m=+0.021329291 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:10:13 compute-0 magical_colden[217840]: {
Oct  9 10:10:13 compute-0 magical_colden[217840]:    "1": [
Oct  9 10:10:13 compute-0 magical_colden[217840]:        {
Oct  9 10:10:13 compute-0 magical_colden[217840]:            "devices": [
Oct  9 10:10:13 compute-0 magical_colden[217840]:                "/dev/loop3"
Oct  9 10:10:13 compute-0 magical_colden[217840]:            ],
Oct  9 10:10:13 compute-0 magical_colden[217840]:            "lv_name": "ceph_lv0",
Oct  9 10:10:13 compute-0 magical_colden[217840]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:10:13 compute-0 magical_colden[217840]:            "lv_size": "21470642176",
Oct  9 10:10:13 compute-0 magical_colden[217840]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c1284347-e90b-4f83-b56e-ee0190c7ef56,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 10:10:13 compute-0 magical_colden[217840]:            "lv_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 10:10:13 compute-0 magical_colden[217840]:            "name": "ceph_lv0",
Oct  9 10:10:13 compute-0 magical_colden[217840]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:10:13 compute-0 magical_colden[217840]:            "tags": {
Oct  9 10:10:13 compute-0 magical_colden[217840]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 10:10:13 compute-0 magical_colden[217840]:                "ceph.block_uuid": "X449jG-z613-vapX-dWin-DAoC-KnQw-t6opOj",
Oct  9 10:10:13 compute-0 magical_colden[217840]:                "ceph.cephx_lockbox_secret": "",
Oct  9 10:10:13 compute-0 magical_colden[217840]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 10:10:13 compute-0 magical_colden[217840]:                "ceph.cluster_name": "ceph",
Oct  9 10:10:13 compute-0 magical_colden[217840]:                "ceph.crush_device_class": "",
Oct  9 10:10:13 compute-0 magical_colden[217840]:                "ceph.encrypted": "0",
Oct  9 10:10:13 compute-0 magical_colden[217840]:                "ceph.osd_fsid": "c1284347-e90b-4f83-b56e-ee0190c7ef56",
Oct  9 10:10:13 compute-0 magical_colden[217840]:                "ceph.osd_id": "1",
Oct  9 10:10:13 compute-0 magical_colden[217840]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 10:10:13 compute-0 magical_colden[217840]:                "ceph.type": "block",
Oct  9 10:10:13 compute-0 magical_colden[217840]:                "ceph.vdo": "0",
Oct  9 10:10:13 compute-0 magical_colden[217840]:                "ceph.with_tpm": "0"
Oct  9 10:10:13 compute-0 magical_colden[217840]:            },
Oct  9 10:10:13 compute-0 magical_colden[217840]:            "type": "block",
Oct  9 10:10:13 compute-0 magical_colden[217840]:            "vg_name": "ceph_vg0"
Oct  9 10:10:13 compute-0 magical_colden[217840]:        }
Oct  9 10:10:13 compute-0 magical_colden[217840]:    ]
Oct  9 10:10:13 compute-0 magical_colden[217840]: }
Oct  9 10:10:13 compute-0 systemd[1]: libpod-69663c5ca26d62f835d14cb2875c70252ad4fa5fb8ca544dd40ce2dc687cf77b.scope: Deactivated successfully.
Oct  9 10:10:13 compute-0 podman[217849]: 2025-10-09 10:10:13.5049643 +0000 UTC m=+0.024181928 container died 69663c5ca26d62f835d14cb2875c70252ad4fa5fb8ca544dd40ce2dc687cf77b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:10:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae3b1df4855e07547f6d851735b90522046a2084681a2321808907accbc83371-merged.mount: Deactivated successfully.
Oct  9 10:10:13 compute-0 podman[217849]: 2025-10-09 10:10:13.530369615 +0000 UTC m=+0.049587225 container remove 69663c5ca26d62f835d14cb2875c70252ad4fa5fb8ca544dd40ce2dc687cf77b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_colden, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:10:13 compute-0 systemd[1]: libpod-conmon-69663c5ca26d62f835d14cb2875c70252ad4fa5fb8ca544dd40ce2dc687cf77b.scope: Deactivated successfully.
Oct  9 10:10:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:13.595Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:13.608Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:13.609Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:13 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:13.610Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:14 compute-0 podman[217941]: 2025-10-09 10:10:14.09050914 +0000 UTC m=+0.038558398 container create dfbf97930638d3a5b66f5a10237d1694f4c1e7747fff0b70a7c3668a34967674 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:10:14 compute-0 systemd[1]: Started libpod-conmon-dfbf97930638d3a5b66f5a10237d1694f4c1e7747fff0b70a7c3668a34967674.scope.
Oct  9 10:10:14 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:10:14 compute-0 podman[217941]: 2025-10-09 10:10:14.164687856 +0000 UTC m=+0.112737133 container init dfbf97930638d3a5b66f5a10237d1694f4c1e7747fff0b70a7c3668a34967674 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_brown, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 10:10:14 compute-0 podman[217941]: 2025-10-09 10:10:14.075914059 +0000 UTC m=+0.023963337 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:10:14 compute-0 podman[217941]: 2025-10-09 10:10:14.172345163 +0000 UTC m=+0.120394421 container start dfbf97930638d3a5b66f5a10237d1694f4c1e7747fff0b70a7c3668a34967674 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:10:14 compute-0 podman[217941]: 2025-10-09 10:10:14.173852916 +0000 UTC m=+0.121902175 container attach dfbf97930638d3a5b66f5a10237d1694f4c1e7747fff0b70a7c3668a34967674 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_brown, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:10:14 compute-0 compassionate_brown[217955]: 167 167
Oct  9 10:10:14 compute-0 podman[217941]: 2025-10-09 10:10:14.177330674 +0000 UTC m=+0.125379933 container died dfbf97930638d3a5b66f5a10237d1694f4c1e7747fff0b70a7c3668a34967674 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:10:14 compute-0 systemd[1]: libpod-dfbf97930638d3a5b66f5a10237d1694f4c1e7747fff0b70a7c3668a34967674.scope: Deactivated successfully.
Oct  9 10:10:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-5da23df3ba1911ed5d7a1a8e9c07d8647aa8c70ac9e00083052bcad93d1d9a18-merged.mount: Deactivated successfully.
Oct  9 10:10:14 compute-0 podman[217941]: 2025-10-09 10:10:14.20379658 +0000 UTC m=+0.151845838 container remove dfbf97930638d3a5b66f5a10237d1694f4c1e7747fff0b70a7c3668a34967674 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_brown, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:10:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:10:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:14.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:10:14 compute-0 systemd[1]: libpod-conmon-dfbf97930638d3a5b66f5a10237d1694f4c1e7747fff0b70a7c3668a34967674.scope: Deactivated successfully.
Oct  9 10:10:14 compute-0 podman[217978]: 2025-10-09 10:10:14.359032801 +0000 UTC m=+0.038347201 container create e37e3f50bb095bebe792be026bb8060c5ec5cd2610b421427b347a8498720e09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_zhukovsky, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  9 10:10:14 compute-0 systemd[1]: Started libpod-conmon-e37e3f50bb095bebe792be026bb8060c5ec5cd2610b421427b347a8498720e09.scope.
Oct  9 10:10:14 compute-0 systemd[1]: Started libcrun container.
Oct  9 10:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb087d9d8b318d10f76cdeb379daa97d7f2eec3aa3a4bf1bd35943fe23d4f32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb087d9d8b318d10f76cdeb379daa97d7f2eec3aa3a4bf1bd35943fe23d4f32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb087d9d8b318d10f76cdeb379daa97d7f2eec3aa3a4bf1bd35943fe23d4f32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb087d9d8b318d10f76cdeb379daa97d7f2eec3aa3a4bf1bd35943fe23d4f32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:10:14 compute-0 podman[217978]: 2025-10-09 10:10:14.344195683 +0000 UTC m=+0.023510083 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:10:14 compute-0 podman[217978]: 2025-10-09 10:10:14.443199169 +0000 UTC m=+0.122513569 container init e37e3f50bb095bebe792be026bb8060c5ec5cd2610b421427b347a8498720e09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct  9 10:10:14 compute-0 podman[217978]: 2025-10-09 10:10:14.451687765 +0000 UTC m=+0.131002155 container start e37e3f50bb095bebe792be026bb8060c5ec5cd2610b421427b347a8498720e09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_zhukovsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:10:14 compute-0 podman[217978]: 2025-10-09 10:10:14.453107171 +0000 UTC m=+0.132421561 container attach e37e3f50bb095bebe792be026bb8060c5ec5cd2610b421427b347a8498720e09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_zhukovsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:10:14 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:14 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:14 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:14.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:14 compute-0 nova_compute[187439]: 2025-10-09 10:10:14.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:10:15 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1121: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 10:10:15 compute-0 objective_zhukovsky[218006]: {}
Oct  9 10:10:15 compute-0 lvm[218095]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:10:15 compute-0 lvm[218095]: VG ceph_vg0 finished
Oct  9 10:10:15 compute-0 systemd[1]: libpod-e37e3f50bb095bebe792be026bb8060c5ec5cd2610b421427b347a8498720e09.scope: Deactivated successfully.
Oct  9 10:10:15 compute-0 podman[217978]: 2025-10-09 10:10:15.118437759 +0000 UTC m=+0.797752149 container died e37e3f50bb095bebe792be026bb8060c5ec5cd2610b421427b347a8498720e09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:10:15 compute-0 systemd[1]: libpod-e37e3f50bb095bebe792be026bb8060c5ec5cd2610b421427b347a8498720e09.scope: Consumed 1.142s CPU time.
Oct  9 10:10:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-afb087d9d8b318d10f76cdeb379daa97d7f2eec3aa3a4bf1bd35943fe23d4f32-merged.mount: Deactivated successfully.
Oct  9 10:10:15 compute-0 podman[217978]: 2025-10-09 10:10:15.144765414 +0000 UTC m=+0.824079804 container remove e37e3f50bb095bebe792be026bb8060c5ec5cd2610b421427b347a8498720e09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:10:15 compute-0 systemd[1]: libpod-conmon-e37e3f50bb095bebe792be026bb8060c5ec5cd2610b421427b347a8498720e09.scope: Deactivated successfully.
Oct  9 10:10:15 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Oct  9 10:10:15 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:15 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Oct  9 10:10:15 compute-0 ceph-mon[4497]: log_channel(audit) log [INF] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:16 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:16 compute-0 ceph-mon[4497]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.002000021s ======
Oct  9 10:10:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:16.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000021s
Oct  9 10:10:16 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:16 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:16 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:16 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:16.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:16 compute-0 nova_compute[187439]: 2025-10-09 10:10:16.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:10:17 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1122: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 10:10:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:17.127Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:17.135Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:17.135Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:17 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:17.135Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:17 compute-0 podman[218133]: 2025-10-09 10:10:17.639918799 +0000 UTC m=+0.068054889 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd)
Oct  9 10:10:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:17 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:10:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:17 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:10:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:17 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:10:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:18 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:10:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:10:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:18.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:10:18 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:18 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:18 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:18.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:18.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:18.962Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:18.964Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:18 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:18.964Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:19 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1123: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct  9 10:10:19 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:10:19 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:10:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:10:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:10:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:10:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:10:19 compute-0 nova_compute[187439]: 2025-10-09 10:10:19.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:10:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:10:19 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:10:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:10:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:20.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:10:20 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:20 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:20 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:20.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:21 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1124: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct  9 10:10:21 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:21 compute-0 nova_compute[187439]: 2025-10-09 10:10:21.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:10:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:22.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:22 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:10:22] "GET /metrics HTTP/1.1" 200 48529 "" "Prometheus/2.51.0"
Oct  9 10:10:22 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:10:22] "GET /metrics HTTP/1.1" 200 48529 "" "Prometheus/2.51.0"
Oct  9 10:10:22 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:22 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:22 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:22.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:10:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:10:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:22 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:10:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:23 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:10:23 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1125: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:10:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:23.596Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:23.603Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:23.604Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:23 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:23.604Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:10:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:24.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:10:24 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:24 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:24 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:24.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:24 compute-0 nova_compute[187439]: 2025-10-09 10:10:24.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:10:25 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1126: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:10:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:26.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:26 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:26 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:26 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:26 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:26.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:26 compute-0 nova_compute[187439]: 2025-10-09 10:10:26.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:10:27 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1127: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:10:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:27.128Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:27.136Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:27.136Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:27 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:27.136Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:10:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:10:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:27 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:10:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:28 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:10:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:10:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:28.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:10:28 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:28 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:28 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:28.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:28 compute-0 podman[218162]: 2025-10-09 10:10:28.643814828 +0000 UTC m=+0.080240948 container health_status 0bfeab26b90b90deb149774b070f1e56bc1c41f74389cd0b7d667ef1354ff962 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Oct  9 10:10:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:28.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:28.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:28.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:28 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:28.964Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:29 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1128: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:10:29 compute-0 nova_compute[187439]: 2025-10-09 10:10:29.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:10:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:10:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:30.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:10:30 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:30 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:30 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:30.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:31 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1129: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:10:31 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:32 compute-0 nova_compute[187439]: 2025-10-09 10:10:32.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:10:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:10:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:32.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:10:32 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:10:32] "GET /metrics HTTP/1.1" 200 48529 "" "Prometheus/2.51.0"
Oct  9 10:10:32 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:10:32] "GET /metrics HTTP/1.1" 200 48529 "" "Prometheus/2.51.0"
Oct  9 10:10:32 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:32 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:10:32 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:32.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:10:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:32 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:10:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:32 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:10:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:32 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:10:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:33 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:10:33 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1130: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:10:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:33.597Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:33.629Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:33.630Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:33 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:33.631Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:10:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:34.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:10:34 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:10:34 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:10:34 compute-0 podman[218213]: 2025-10-09 10:10:34.591687477 +0000 UTC m=+0.050336137 container health_status 5df366ac4494d24362702b0e9607507b670ab1475b88d2d146e506c3aed457d9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  9 10:10:34 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:34 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:34 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:34.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:34 compute-0 nova_compute[187439]: 2025-10-09 10:10:34.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:10:35 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1131: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:10:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:36.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:36 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:36 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:36 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:36 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:36.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:37 compute-0 nova_compute[187439]: 2025-10-09 10:10:37.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:10:37 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1132: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:10:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:37.128Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:37.141Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:37.141Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:37 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:37.142Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:10:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:10:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:37 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:10:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:38 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:10:38 compute-0 systemd[1]: Created slice User Slice of UID 1000.
Oct  9 10:10:38 compute-0 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  9 10:10:38 compute-0 systemd-logind[798]: New session 45 of user zuul.
Oct  9 10:10:38 compute-0 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  9 10:10:38 compute-0 systemd[1]: Starting User Manager for UID 1000...
Oct  9 10:10:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.002000022s ======
Oct  9 10:10:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:38.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000022s
Oct  9 10:10:38 compute-0 systemd[218241]: Queued start job for default target Main User Target.
Oct  9 10:10:38 compute-0 systemd[218241]: Created slice User Application Slice.
Oct  9 10:10:38 compute-0 systemd[218241]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 10:10:38 compute-0 systemd[218241]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 10:10:38 compute-0 systemd[218241]: Reached target Paths.
Oct  9 10:10:38 compute-0 systemd[218241]: Reached target Timers.
Oct  9 10:10:38 compute-0 systemd[218241]: Starting D-Bus User Message Bus Socket...
Oct  9 10:10:38 compute-0 systemd[218241]: Starting Create User's Volatile Files and Directories...
Oct  9 10:10:38 compute-0 systemd[218241]: Finished Create User's Volatile Files and Directories.
Oct  9 10:10:38 compute-0 systemd[218241]: Listening on D-Bus User Message Bus Socket.
Oct  9 10:10:38 compute-0 systemd[218241]: Reached target Sockets.
Oct  9 10:10:38 compute-0 systemd[218241]: Reached target Basic System.
Oct  9 10:10:38 compute-0 systemd[1]: Started User Manager for UID 1000.
Oct  9 10:10:38 compute-0 systemd[218241]: Reached target Main User Target.
Oct  9 10:10:38 compute-0 systemd[218241]: Startup finished in 122ms.
Oct  9 10:10:38 compute-0 systemd[1]: Started Session 45 of User zuul.
Oct  9 10:10:38 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:38 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:10:38 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:38.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:10:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:38.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:38.975Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:38.975Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:38 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:38.976Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:39 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1133: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:10:39 compute-0 nova_compute[187439]: 2025-10-09 10:10:39.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:10:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:40.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:40 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28538 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:40 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28544 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:40 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18693 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:40 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:40 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:40 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:40.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:40 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18699 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:40 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28559 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:40 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28294 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:41 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1134: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:10:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct  9 10:10:41 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1945846219' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  9 10:10:41 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:41 compute-0 podman[218500]: 2025-10-09 10:10:41.849792953 +0000 UTC m=+0.050741782 container health_status 87f9f4f242f517060b0e0d48a2f0ddc13a84594e9c448ec932147a680bddb626 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  9 10:10:42 compute-0 nova_compute[187439]: 2025-10-09 10:10:42.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:10:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:42.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:42 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: ::ffff:192.168.122.100 - - [09/Oct/2025:10:10:42] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Oct  9 10:10:42 compute-0 ceph-mgr[4772]: [prometheus INFO cherrypy.access.139955036600688] ::ffff:192.168.122.100 - - [09/Oct/2025:10:10:42] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Oct  9 10:10:42 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:42 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:10:42 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:42.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:10:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:10:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:10:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:42 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:10:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:43 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:10:43 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1135: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:10:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:43.598Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[2]: notify retry canceled after 8 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:43.605Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:43.605Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:43 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:43.606Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:43 compute-0 ovs-vsctl[218556]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct  9 10:10:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:10:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:44.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:10:44 compute-0 virtqemud[187041]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct  9 10:10:44 compute-0 virtqemud[187041]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct  9 10:10:44 compute-0 virtqemud[187041]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  9 10:10:44 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:44 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:44 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:44.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:44 compute-0 nova_compute[187439]: 2025-10-09 10:10:44.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:10:44 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: cache status {prefix=cache status} (starting...)
Oct  9 10:10:44 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:10:44 compute-0 lvm[218846]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:10:44 compute-0 lvm[218846]: VG ceph_vg0 finished
Oct  9 10:10:44 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: client ls {prefix=client ls} (starting...)
Oct  9 10:10:44 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:10:45 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1136: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:10:45 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18735 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:45 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18729 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:45 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: damage ls {prefix=damage ls} (starting...)
Oct  9 10:10:45 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:10:45 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28610 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:45 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct  9 10:10:45 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  9 10:10:45 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct  9 10:10:45 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  9 10:10:45 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: dump loads {prefix=dump loads} (starting...)
Oct  9 10:10:45 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:10:45 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct  9 10:10:45 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  9 10:10:45 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct  9 10:10:45 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:10:45 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28339 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:45 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28625 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:45 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28631 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:45 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct  9 10:10:45 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:10:45 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:10:45 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1403612783' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:10:45 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:10:45 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/262828082' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:10:46 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct  9 10:10:46 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:10:46 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct  9 10:10:46 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:10:46 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18777 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:46.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:46 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28369 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:46 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28655 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:46 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct  9 10:10:46 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:10:46 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct  9 10:10:46 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:10:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:46 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18801 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:46 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:46 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:46 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:46.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:46 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28402 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:46 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28670 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct  9 10:10:46 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2170318210' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct  9 10:10:46 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: ops {prefix=ops} (starting...)
Oct  9 10:10:46 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:10:46 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct  9 10:10:46 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2182137390' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct  9 10:10:47 compute-0 nova_compute[187439]: 2025-10-09 10:10:47.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:10:47 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28432 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:47 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1137: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct  9 10:10:47 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18840 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:47.129Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:47.139Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:47.139Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:47 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:47.139Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:47 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28706 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:47 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18858 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:47 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: session ls {prefix=session ls} (starting...)
Oct  9 10:10:47 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle Can't run that command on an inactive MDS!
Oct  9 10:10:47 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28462 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:47 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28733 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:47 compute-0 ceph-mds[24432]: mds.cephfs.compute-0.wjwyle asok_command: status {prefix=status} (starting...)
Oct  9 10:10:47 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct  9 10:10:47 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/536044603' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  9 10:10:47 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct  9 10:10:47 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  9 10:10:47 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct  9 10:10:47 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  9 10:10:47 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct  9 10:10:47 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3107918271' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  9 10:10:47 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct  9 10:10:47 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  9 10:10:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:10:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:10:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:10:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-2-0-compute-0-rlqbpy[72839]: 09/10/2025 10:10:47 : epoch 68e78389 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:10:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct  9 10:10:48 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1972161551' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  9 10:10:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Oct  9 10:10:48 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1803218710' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct  9 10:10:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:48.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  9 10:10:48 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2199241439' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  9 10:10:48 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18933 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T10:10:48.527+0000 7f49f65cf640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  9 10:10:48 compute-0 ceph-mgr[4772]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  9 10:10:48 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28537 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T10:10:48.579+0000 7f49f65cf640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  9 10:10:48 compute-0 ceph-mgr[4772]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  9 10:10:48 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.18948 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-0-lwqgfy[4768]: 2025-10-09T10:10:48.620+0000 7f49f65cf640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  9 10:10:48 compute-0 ceph-mgr[4772]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  9 10:10:48 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:48 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:48 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:48.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:48 compute-0 podman[219449]: 2025-10-09 10:10:48.672517083 +0000 UTC m=+0.107112698 container health_status 6a0b51670cf69b5798220aca8ef5fd4549a42b8eb7d6a48cd22a7d1840799e8e (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd)
Oct  9 10:10:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct  9 10:10:48 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279356677' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  9 10:10:48 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct  9 10:10:48 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4256149710' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct  9 10:10:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:48.959Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 7 attempts: Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:48.971Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478304.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478304.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:48.971Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478302.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478302.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:48 compute-0 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-alertmanager-compute-0[33682]: ts=2025-10-09T10:10:48.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://np0005478303.shiftstack:8443/api/prometheus_receiver\": dial tcp: lookup np0005478303.shiftstack on 192.168.122.80:53: no such host"
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: log_channel(cluster) log [DBG] : pgmap v1138: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct  9 10:10:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct  9 10:10:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3524732949' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  9 10:10:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct  9 10:10:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2270317148' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Optimize plan auto_2025-10-09_10:10:49
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: [balancer INFO root] do_upmap
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'volumes', '.rgw.root', 'vms', 'default.rgw.meta', 'images', 'backups', '.nfs', 'default.rgw.control', 'cephfs.cephfs.data']
Oct  9 10:10:49 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: [balancer INFO root] prepared 0/10 upmap changes
Oct  9 10:10:49 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.19014 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28871 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:49 compute-0 nova_compute[187439]: 2025-10-09 10:10:49.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] scanning for idle connections..
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: [volumes INFO mgr_util] cleaning up connections: []
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28627 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  9 10:10:49 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.19038 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:50 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28895 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:50 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28901 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:50.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:50 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.19062 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:50 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28928 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:50 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28666 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct  9 10:10:50 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/563879597' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  9 10:10:50 compute-0 radosgw[23518]: ====== starting new request req=0x7f7346e135d0 =====
Oct  9 10:10:50 compute-0 radosgw[23518]: ====== req done req=0x7f7346e135d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:50 compute-0 radosgw[23518]: beast: 0x7f7346e135d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:50.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:50 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.19095 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:50 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28949 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:50 compute-0 ceph-mgr[4772]: log_channel(audit) log [DBG] : from='client.28952 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  9 10:10:50 compute-0 ceph-mon[4497]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  9 10:10:50 compute-0 ceph-mon[4497]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/852438262' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 360448 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 352256 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fcac0000/0x0/0x4ffc00000, data 0xccde3/0x15b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 352256 heap: 77553664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 115 handle_osd_map epochs [115,116], i have 115, src has [1,116]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19(unlocked)] enter Initial
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=0 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=0 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000019
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000092 1 0.000033
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000134 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 116 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 116 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.884045 2 0.000049
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.884219 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.884279 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=0 lpr=116 pi=[79,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000369 1 0.000534
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000113 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 117 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 327680 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 809264 data_alloc: 218103808 data_used: 110592
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct  9 10:10:50 compute-0 ceph-osd[12528]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 118 pg[10.19( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.004743 6 0.000492
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 118 pg[10.19( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 118 pg[10.19( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 118 pg[10.19( v 40'1059 lc 40'189 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001574 3 0.000203
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 118 pg[10.19( v 40'1059 lc 40'189 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 118 pg[10.19( v 40'1059 lc 40'189 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000066 1 0.000083
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 118 pg[10.19( v 40'1059 lc 40'189 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.049863 1 0.000037
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 1368064 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fcab5000/0x0/0x4ffc00000, data 0xd2f8f/0x164000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.954766 1 0.000084
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.006457 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.011390 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] r=-1 lpr=117 pi=[79,117)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000222 1 0.000295
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000094 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000055 1 0.000214
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:10:50 compute-0 ceph-osd[12528]: merge_log_dups log.dups.size()=0 olog.dups.size()=40
Oct  9 10:10:50 compute-0 ceph-osd[12528]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=40
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000738 3 0.000065
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 1351680 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.b deep-scrub starts
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.b deep-scrub ok
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995898 2 0.000110
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996784 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=119/120 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 120 handle_osd_map epochs [119,120], i have 120, src has [1,120]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=119/120 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=119/120 n=5 ec=53/34 lis/c=119/79 les/c/f=120/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001263 3 0.000091
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=119/120 n=5 ec=53/34 lis/c=119/79 les/c/f=120/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=119/120 n=5 ec=53/34 lis/c=119/79 les/c/f=120/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=119/120 n=5 ec=53/34 lis/c=119/79 les/c/f=120/80/0 sis=119) [1] r=0 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 120 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 1343488 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 1343488 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.c deep-scrub starts
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 6.c deep-scrub ok
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 1318912 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 833800 data_alloc: 218103808 data_used: 106496
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.982757568s of 10.031836510s, submitted: 60
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 1302528 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1294336 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fcaae000/0x0/0x4ffc00000, data 0xd8f11/0x16e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 121 heartbeat osd_stat(store_statfs(0x4fcaae000/0x0/0x4ffc00000, data 0xd8f11/0x16e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 1294336 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 1286144 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b(unlocked)] enter Initial
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=0 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=0 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000027
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000095 1 0.000045
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000026 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000190 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 122 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 122 handle_osd_map epochs [122,123], i have 123, src has [1,123]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.433377 2 0.000103
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.433589 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.433613 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=0 lpr=122 pi=[84,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000043 1 0.000073
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 123 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 123 handle_osd_map epochs [123,123], i have 123, src has [1,123]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 1220608 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852043 data_alloc: 218103808 data_used: 106496
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 1196032 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct  9 10:10:50 compute-0 ceph-osd[12528]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 124 pg[10.1b( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=2 mbc={}] exit Started/Stray 1.930692 5 0.000209
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 124 pg[10.1b( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 124 pg[10.1b( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 124 pg[10.1b( v 40'1059 lc 40'529 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.001850 4 0.000087
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 124 pg[10.1b( v 40'1059 lc 40'529 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 124 pg[10.1b( v 40'1059 lc 40'529 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000037 1 0.000053
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 124 pg[10.1b( v 40'1059 lc 40'529 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.014743 1 0.000020
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 1196032 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.256516 1 0.000021
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.273277 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.204261 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] r=-1 lpr=123 pi=[84,123)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000662 1 0.000864
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000111 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 1 0.000218
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:10:50 compute-0 ceph-osd[12528]: merge_log_dups log.dups.size()=0 olog.dups.size()=15
Oct  9 10:10:50 compute-0 ceph-osd[12528]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000529 3 0.000055
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fca9c000/0x0/0x4ffc00000, data 0xe31ef/0x17e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 1171456 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001474 2 0.000444
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002545 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=125/126 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=125/126 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 126 handle_osd_map epochs [125,126], i have 126, src has [1,126]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=125/126 n=5 ec=53/34 lis/c=125/84 les/c/f=126/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001393 3 0.000805
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=125/126 n=5 ec=53/34 lis/c=125/84 les/c/f=126/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=125/126 n=5 ec=53/34 lis/c=125/84 les/c/f=126/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000040 0 0.000000
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=125/126 n=5 ec=53/34 lis/c=125/84 les/c/f=126/85/0 sis=125) [1] r=0 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.1b deep-scrub starts
Oct  9 10:10:50 compute-0 ceph-osd[12528]: log_channel(cluster) log [DBG] : 10.1b deep-scrub ok
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 126 handle_osd_map epochs [126,126], i have 126, src has [1,126]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 1155072 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 1155072 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862978 data_alloc: 218103808 data_used: 110592
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 1155072 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 126 handle_osd_map epochs [127,128], i have 126, src has [1,128]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.363139153s of 10.403436661s, submitted: 44
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 1130496 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 1122304 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fca94000/0x0/0x4ffc00000, data 0xe93a3/0x187000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 128 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 1114112 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 1064960 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877780 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 131 ms_handle_reset con 0x563ba754d400 session 0x563ba81cd4a0
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 1064960 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fca8b000/0x0/0x4ffc00000, data 0xef454/0x190000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 131 handle_osd_map epochs [132,133], i have 131, src has [1,133]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 131 handle_osd_map epochs [132,133], i have 133, src has [1,133]
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77553664 unmapped: 1048576 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77553664 unmapped: 1048576 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 999424 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 999424 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 884900 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca84000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 991232 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 991232 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 991232 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 983040 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 983040 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 884900 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca84000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 974848 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.910986900s of 14.925504684s, submitted: 16
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 974848 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 974848 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 966656 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 942080 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 884452 data_alloc: 218103808 data_used: 114688
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 942080 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 933888 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 ms_handle_reset con 0x563ba96ebc00 session 0x563ba9dd0b40
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 ms_handle_reset con 0x563ba768d000 session 0x563ba831fe00
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 ms_handle_reset con 0x563ba813e800 session 0x563ba91e8d20
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 925696 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 917504 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 901120 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885980 data_alloc: 218103808 data_used: 114688
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 901120 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 892928 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 892928 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 884736 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 884736 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885980 data_alloc: 218103808 data_used: 114688
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.982664108s of 14.991124153s, submitted: 10
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 884736 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 876544 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 876544 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 868352 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 868352 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886080 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 843776 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 835584 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 835584 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 819200 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 770048 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886096 data_alloc: 218103808 data_used: 114688
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 770048 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.519762039s of 10.531422615s, submitted: 11
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 745472 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 745472 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 ms_handle_reset con 0x563ba754d000 session 0x563ba92434a0
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 737280 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 737280 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886080 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 737280 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 712704 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885225 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 712704 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 704512 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 704512 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 704512 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77905920 unmapped: 696320 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885225 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77905920 unmapped: 696320 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 688128 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 688128 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 679936 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 679936 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885225 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 679936 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 671744 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 671744 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 671744 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885225 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 638976 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885225 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.465995789s of 29.472551346s, submitted: 5
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 638976 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 573440 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 573440 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 573440 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 540672 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 540672 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 540672 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 524288 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 524288 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 524288 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 507904 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 507904 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 499712 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 499712 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 491520 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 491520 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 491520 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 483328 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 ms_handle_reset con 0x563ba754d400 session 0x563baa26cb40
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 483328 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 475136 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 475136 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 475136 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 466944 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 466944 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 466944 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 458752 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 458752 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 442368 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 442368 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 50.456504822s of 50.457698822s, submitted: 1
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 442368 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 434176 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 434176 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 434176 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885241 data_alloc: 218103808 data_used: 114688
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 425984 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 425984 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 425984 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 417792 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 417792 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885241 data_alloc: 218103808 data_used: 114688
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 409600 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 409600 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 409600 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 409600 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 401408 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885241 data_alloc: 218103808 data_used: 114688
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 401408 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 368640 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 360448 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.996906281s of 17.000619888s, submitted: 4
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 360448 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 344064 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885109 data_alloc: 218103808 data_used: 114688
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 344064 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 335872 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 335872 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 335872 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 327680 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885109 data_alloc: 218103808 data_used: 114688
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 327680 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 319488 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 319488 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 319488 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 311296 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885109 data_alloc: 218103808 data_used: 114688
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 311296 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 311296 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 303104 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 303104 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 294912 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885109 data_alloc: 218103808 data_used: 114688
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 294912 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 294912 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 278528 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 278528 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 278528 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885109 data_alloc: 218103808 data_used: 114688
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 270336 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 ms_handle_reset con 0x563ba6bff400 session 0x563ba74dfa40
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 270336 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 262144 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 253952 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 245760 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885109 data_alloc: 218103808 data_used: 114688
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 245760 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 237568 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 237568 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 237568 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 237568 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885109 data_alloc: 218103808 data_used: 114688
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78372864 unmapped: 229376 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 221184 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.124507904s of 34.125598907s, submitted: 1
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 221184 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 221184 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 212992 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885225 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 212992 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 212992 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 204800 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 204800 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 196608 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885225 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 196608 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 188416 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 188416 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78422016 unmapped: 180224 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 172032 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885225 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 172032 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 163840 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.921249390s of 14.924689293s, submitted: 3
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78422016 unmapped: 180224 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78422016 unmapped: 180224 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 172032 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 172032 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 172032 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 163840 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 163840 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 155648 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 155648 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 155648 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78454784 unmapped: 147456 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78454784 unmapped: 147456 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 139264 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 139264 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 122880 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 114688 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 114688 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 106496 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 122880 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 122880 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 114688 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 106496 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78503936 unmapped: 98304 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78503936 unmapped: 98304 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 90112 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 90112 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 90112 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 81920 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 81920 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 73728 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 73728 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 73728 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 65536 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:50 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:50 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 65536 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 57344 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 57344 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 57344 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 49152 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 49152 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 49152 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 40960 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 32768 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 24576 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:51 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 24576 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 24576 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 24576 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16384 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 16384 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:51 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 8192 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 8192 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 8192 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 0 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 0 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1040384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1040384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1040384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1024000 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1024000 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1024000 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 999424 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 999424 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 983040 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-0 ceph-osd[12528]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 983040 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885093 data_alloc: 218103808 data_used: 118784
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 983040 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-0 ceph-osd[12528]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fca86000/0x0/0x4ffc00000, data 0xf33ad/0x196000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Oct  9 10:10:51 compute-0 ceph-osd[12528]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
